UBC Theses and Dissertations

Brain connectivity network modeling using fMRI signals Liu, Aiping 2016

Brain Connectivity Network Modeling using fMRI Signals

by

Aiping Liu

B.Sc., University of Science and Technology of China, 2009
M.A.Sc., The University of British Columbia, 2011

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF Doctor of Philosophy in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Electrical and Computer Engineering)

The University of British Columbia (Vancouver)

May 2016

© Aiping Liu, 2016

Abstract

Functional magnetic resonance imaging (fMRI) is one of the most popular non-invasive neuroimaging technologies; it examines the human brain at relatively good spatial resolution in both normal and disease states. In addition to the investigation of local neural activity in isolated brain regions, brain connectivity estimated from fMRI has provided a system-level view of brain function. Despite recent progress on brain connectivity inference, several challenges remain. Specifically, this thesis focuses on developing novel brain connectivity modeling approaches that address particular challenges of real biomedical applications, including group pattern extraction from a population, false discovery rate control, incorporation of prior knowledge, and time-varying brain connectivity network modeling.

First, we propose a multi-subject, exploratory brain connectivity modeling approach that allows incorporation of prior knowledge of connectivity and determination of the dominant brain connectivity patterns among a group of subjects. Furthermore, to integrate genetic information at the population level, a framework for genetically-informed group brain connectivity modeling is developed.

We then focus on estimating time-varying brain connectivity networks.
The temporal dynamics of brain connectivity assess the brain in the additional temporal dimension and provide a new perspective on the understanding of brain function. In this thesis, we develop a sticky weighted time-varying model to investigate time-dependent brain connectivity networks. As the brain must strike a balance between stability and flexibility, assuming that brain connectivity is purely static or purely dynamic may be unrealistic. We therefore further propose making joint inference of time-invariant connections and time-varying coupling patterns by employing a multitask learning model.

The above proposed methods have been applied to real fMRI data sets, and disease-induced changes in the brain connectivity networks have been observed. Brain connectivity studies are able to provide deeper insights into neurological diseases, complementing traditional symptom-based diagnostic methods. Results reported in this thesis suggest that brain connectivity patterns may serve as potential disease biomarkers in Parkinson's Disease.

Preface

The research work in this thesis was jointly initiated by Dr. Z. Jane Wang, Dr. Martin J. McKeown and me. The thesis is based on a collection of manuscripts that have been accepted or submitted for publication as a book chapter and in international peer-reviewed journals and conferences.

Chapter 1 and Chapter 2 are based on the following manuscripts:

• Aiping Liu, Junning Li, Z. Jane Wang, and Martin J. McKeown, "An FDR-controlled, exploratory group modeling for assessing brain connectivity", 9th IEEE International Symposium on Biomedical Imaging (ISBI), pages 558-561, May 2012.

• Aiping Liu, Junning Li, Z. Jane Wang, and Martin J. McKeown, "A Computationally Efficient, Exploratory Approach to Brain Connectivity Incorporating False Discovery Rate Control, A Priori Knowledge, and Group Inference", Computational and Mathematical Methods in Medicine, vol. 2012, 14 pages, 2012.

• Aiping Liu, Junning Li, Z. Jane Wang and Martin J.
McKeown, "Brain connectivity assessed with functional MRI", Medical Imaging Technology and Applications, CRC Press, 2013.

In the above manuscripts, the development of analysis techniques was jointly conducted by the author and Dr. Junning Li under the supervision of Dr. Z. Jane Wang and Dr. Martin J. McKeown. The experimental data were provided by Dr. Martin J. McKeown of PPRC (Clinical Research Ethics Board, H04-70177). I was responsible for the technical literature survey, simulation, real data application and manuscript writing. The neurological interpretation of the fMRI application was written with the guidance of Dr. Martin J. McKeown.

Chapter 3 is based on the following manuscript:

• Aiping Liu, Xiaohui Chen, Z. Jane Wang, Qi Xu, Silke Appel-Cresswell and Martin J. McKeown, "A Genetically Informed, Group fMRI Connectivity Modeling Approach: Application to Schizophrenia", IEEE Transactions on Biomedical Engineering, vol. 61, no. 3, pp. 946-956, March 2014.

I was responsible for the development of the algorithm, numerical simulation and real data application. The asymptotic properties of the algorithm were analyzed by Dr. Xiaohui Chen. The genetic and fMRI data were collected, pre-processed and described by Dr. Qi Xu. I prepared the manuscript with suggestions and subsequent revisions from Dr. Silke Appel-Cresswell, Dr. Z. Jane Wang and Dr. Martin J. McKeown.

Chapter 4 is based on the following manuscripts:

• Aiping Liu, Xun Chen, Z. Jane Wang and Martin J. McKeown, "Time varying brain connectivity modeling using fMRI signals", IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 2089-2093, May 2014.

• Aiping Liu, Xun Chen, Martin J. McKeown and Z. Jane Wang, "A Sticky Weighted Regression Model for Time-Varying Resting State Brain Connectivity Estimation", IEEE Transactions on Biomedical Engineering, vol. 62, no. 2, pp. 501-510, Feb. 2015.

In the above manuscripts, I was responsible for the model design, numerical simulation and real fMRI application.
The experimental data were provided by Dr. Martin J. McKeown of PPRC (Clinical Research Ethics Board, H04-70177). I drafted the manuscript with subsequent editorial input from Dr. Xun Chen, Dr. Z. Jane Wang and Dr. Martin J. McKeown.

A version of Chapter 5 has been submitted for publication:

• Aiping Liu, Xun Chen, Xiaojuan Dan, Martin J. McKeown and Z. Jane Wang, "A Combined Static and Dynamic Model for Resting State fMRI Brain Connectivity Networks: Application to Parkinson's Disease", submitted, 2015.

I was responsible for the algorithm development, numerical simulation, real data application and paper writing. The data description was originally written by Dr. Martin J. McKeown (Clinical Research Ethics Board, H04-70177). Coauthors have provided editorial input for the manuscript.

Table of Contents

Abstract
Preface
Table of Contents
List of Tables
List of Figures
Glossary
Acknowledgments

1 Introduction and Overview
  1.1 Functional Magnetic Resonance Imaging and Brain Activity Study
  1.2 Brain Connectivity Network Modeling
    1.2.1 Group Level Brain Connectivity Network Modeling
    1.2.2 Time Varying Brain Connectivity Network Modeling
    1.2.3 Brain Connectivity Network Interpretation
  1.3 Research Objectives and Methodologies
  1.4 Thesis Outline
2 False Discovery Rate Controlled, Prior Knowledge Incorporated Group Brain Connectivity Modeling
  2.1 Introduction
  2.2 Methods
  2.3 Real Application
  2.4 Conclusion and Discussion

3 A Genetically-informed, Group Brain Connectivity Modeling Framework
  3.1 Introduction
  3.2 Methods
    3.2.1 Overlapped Group Fused Model
    3.2.2 Model Selection and Degree of Freedom
    3.2.3 Implementation of Brain Connectivity Modeling and Statistical Inference
    3.2.4 Graph Theoretical Analysis
  3.3 Simulations
  3.4 Real Application in Schizophrenia
    3.4.1 Subjects
    3.4.2 Selection and Genotyping of SNPs
    3.4.3 fMRI Data
    3.4.4 Results
  3.5 Conclusion and Discussion
  3.6 Asymptotic Properties

4 A Sticky Weighted Time Varying Model for Resting State Brain Connectivity Estimation
  4.1 Introduction
  4.2 Methods
    4.2.1 Sticky Weighted Time Varying Model
    4.2.2 Model Selection
    4.2.3 Statistical Analysis
  4.3 Simulations
  4.4 Real Application
    4.4.1 Subjects and fMRI Resting State Data Set
    4.4.2 Results
  4.5 Conclusion and Discussion

5 A Combined Static and Dynamic Model for Resting State Brain Connectivity Estimation
  5.1 Introduction
  5.2 Methods
    5.2.1 Combined Static and Dynamic Brain Connectivity Network Estimation
    5.2.2 Eigenconnectivity Network Extraction
    5.2.3 Dynamic Feature Extraction
  5.3 Simulations
  5.4 Real Application
    5.4.1 Subjects and fMRI Resting State Data Set
    5.4.2 Results
  5.5 Conclusion and Discussion

6 Conclusion and Future Work
  6.1 Conclusion
  6.2 Future Work
    6.2.1 Brain Connectivity Transition Patterns Estimation
    6.2.2 Large Scale Brain Connectivity Network Modeling
    6.2.3 Connectivity Based Brain Voxel Selection
    6.2.4 Application to Parkinson's Disease Studies

Bibliography
List of Tables

Table 2.1 The name and abbreviation of 12 selected brain ROIs.
Table 3.1 Implementation of the overlapped group fused model.
Table 3.2 Implementation of model selection.
Table 3.3 The index and name of 52 selected brain ROIs. 'L' represents the brain left side and 'R' represents the brain right side.
Table 4.1 Implementation of the sticky weighted time varying model.
Table 4.2 The index and name of 48 selected brain ROIs. 'L' represents the brain left side and 'R' represents the brain right side.
Table 5.1 Implementation of the combined static and dynamic model for time varying coefficient estimation.
Table 5.2 The index and name of 54 selected brain ROIs. 'L' represents the brain left side and 'R' represents the brain right side.

List of Figures

Figure 1.1 BOLD signal mechanism for fMRI.
Figure 1.2 Example of an fMRI block design experiment.
Figure 1.3 Example of modeling the functional brain connectivity network.
Figure 1.4 Illustration of pairwise dependence and conditional dependence.
Figure 1.5 Overview of the challenges and objectives of this thesis work.
Figure 2.1 Learned brain connectivity for normal and patient groups.
Figure 3.1 The proposed framework for genetically-informed group brain connectivity modeling.
Figure 3.2 The comparison of estimated degree of freedom and true degree of freedom.
Figure 3.3 One example of the heterogeneous overlapped group structure used in the simulation.
Figure 3.4 Simulation results: the simulation is designed to recover a heterogeneous group network based on measures such as Type I error rate, detection power, FDR and F1 score as the number of data points increases.
Figure 3.5 The comparisons between normal control and Schizophrenia groups.
Figure 3.6 The comparisons of node degree at the Left Putamen region in different conditions.
Figure 3.7 Examples of brain connectivity networks in the group modeling.
Figure 3.8 The comparisons between two groups with rs2391191 AG and AA genotypes.
Figure 4.1 Results for the first simulation.
Figure 4.2 Results for the second simulation.
Figure 4.3 Results for the third simulation.
Figure 4.4 The comparisons of F1 scores.
Figure 4.5 The comparisons of density and network variations with different tuning parameters.
Figure 4.6 Time varying brain connectivity networks learned with fixed parameters for one typical control subject at different time points.
Figure 4.7 Time varying brain connectivity networks learned with fixed parameters for one typical PD subject at different time points.
Figure 4.8 The comparison of network variation between the normal and PD groups.
Figure 4.9 The comparison of averaged time period between the normal and PD groups.
Figure 4.10 The comparison of switching ratio between the normal and PD groups.
Figure 4.11 Connections that consistently appear in at least one time point in all subjects in (a) the control group and (b) the PD group.
Figure 5.1 Simulation example for CSDM.
Figure 5.2 The comparison of the L2-loss of the estimated coefficients for (a) simulation 1 and (b) simulation 2.
Figure 5.3 Examples of static eigenconnectivities (fixed density).
Figure 5.4 Examples of dynamic eigenconnectivities (fixed density).
Figure 5.5 The comparisons of static eigenconnectivity networks and averaged contributions of the control and PD groups to the static eigenconnectivity networks.
Figure 5.6 The comparisons of dynamic eigenconnectivity networks and averaged contributions of the control and PD groups to the dynamic eigenconnectivity networks.
Figure 5.7 The comparison of network variations between the control, PDon and PDoff groups.

Glossary

BIC Bayesian information criterion
BOLD Blood-oxygen-level dependence
CSDM Combined static and dynamic model
CT Computed tomography
DAOA D-amino acid oxidase activator
DCM Dynamic causal modeling
DF Degree of freedom
DM Dirty model
EEG Electroencephalography
FDR False discovery rate
GC Granger causality
GIMME Group iterative multiple model estimation
GLM General Linear Model
fMRI Functional magnetic resonance imaging
ICA Independent component analysis
LASSO Least absolute selection and shrinkage operator
MAR Multivariate autoregressive model
MEG Magnetoencephalography
OGFM Overlapped group fused model
PCA Principal component analysis
PD Parkinson's disease
PET Positron emission tomography
ROI Region of interest
SEM Structural equation modeling
SNPs Single nucleotide polymorphisms
SVD Singular value decomposition
SWTV Sticky weighted time varying
WTV Weighted time varying

Acknowledgments

It has been a happy journey walking towards my PhD degree, with support and encouragement from numerous people. Needless to say, I thank all of them. In particular, I want to take this opportunity to acknowledge some of them.

First and foremost, I would like to express my great appreciation to my supervisors, Dr. Z. Jane Wang and Dr. Martin J. McKeown, who are the best supervisors I could ever expect.
I thank them for their constant academic and personal support over the years. Without their enlightening guidance, this thesis would never have been written. Many thanks go to Dr. Junning Li and Dr. Xiaohui Chen for providing me with many valuable discussions and much assistance in my research.

I want to thank my thesis examination committee members for their time and effort. I would also like to acknowledge Dr. Silke Appel-Cresswell, Dr. Nazanin Baradaran, Suejin Lin and Sun Nee Tan from the Pacific Parkinson's Research Centre for providing me with the data and for helpful discussions regarding the Parkinson's Disease studies.

I am thankful to all my labmates, colleagues and coauthors, Dr. Joyce Chiang, Dr. Xudong Lv, Dr. Chen He, Dr. Zhenyu Guo, Liang Zou, Huan Qi, Jiannan Zheng and Yiming Zhang, for creating a supportive research environment and for the fun we have had together. I also would like to thank my friends, Haolu Zhang, Shumin Wang, Dr. Di Xu, Dr. Qiang Tang, Dr. Haoming Li, Dr. Erkai Liu, Dr. Mengzhe Shen, Yifeng Zhou, and many others from UBC, for the friendship and moral support they provided.

I am deeply indebted to my parents for their unconditional love and constant encouragement. A special thank-you goes to my husband, Dr. Xun Chen, for his understanding, support and love. Dr. Xun Chen is my friend, collaborator and great partner. Ryan Y. Chen, my son, thank you for your sweet smile and for making me a multitask learner.

This work was supported by the Four Year Doctoral Fellowship (4YF) program of UBC, Natural Sciences and Engineering Research Council (NSERC) of Canada grants and Canadian Institutes of Health Research (CIHR) grants.

Chapter 1

Introduction and Overview

The human brain is considered one of the most complex systems, and many efforts have been devoted to studying its structure and function, in which neuroimaging technologies have proven to be powerful tools.
Neuroimaging technologies such as Electroencephalography (EEG), Magnetoencephalography (MEG), Computed tomography (CT), Positron emission tomography (PET) and Functional magnetic resonance imaging (fMRI) have become prevalent in recent neuroscience studies, as they are capable of non-invasively examining brain activity in vivo. In particular, fMRI, which measures brain function at better spatial resolution than other functional neuroimaging modalities, has been widely employed in biomedical applications, achieving remarkable progress in understanding our brain.

The advances in neuroimaging technologies have also encouraged the development of suitable data modeling approaches. Conventional methods focus on functional segregation studies, which try to identify the functional specialization of particular brain regions. However, since most cognitive states involve the coherent activation of several functionally specialized regions, investigating brain activity in isolated brain areas may be insufficient. As a result, an alternative organizational principle, functional integration, has been introduced to characterize brain activity. This has led to the emergence of the concept of brain connectivity, which estimates the interaction patterns of a set of distant brain regions.

Brain connectivity has provided a system-level view of how the brain works and facilitated the exploration of brain functions in normal states. In addition, it has expanded our insights into related brain diseases. For instance, neurological disorders such as Parkinson's disease (PD) have an enormous impact on the whole population, and neuroimaging studies benefit the detection of these diseases at an early stage [40, 58]. The discovery of altered connectivity patterns is promising for assisting disease diagnosis, severity detection and medication evaluation. In this thesis, we are thus particularly interested in the estimation of brain connectivity.
A set of novel network modeling approaches is developed, with the ultimate goal of investigating disease-induced effects on brain connectivity patterns.

In the remainder of this chapter, we first introduce the background on fMRI and brain activity studies in Section 1.1. Section 1.2 reviews a list of popular brain connectivity network modeling approaches and briefly discusses network interpretation. Section 1.3 introduces the research objectives and methodologies. The thesis outline is finally presented in Section 1.4.

1.1 Functional Magnetic Resonance Imaging and Brain Activity Study

fMRI is one of the most widely used neuroimaging technologies; it measures relative changes in deoxygenated haemoglobin in the form of Blood-oxygen-level dependence (BOLD) signals as a result of ongoing brain activity [108]. When a brain region becomes active, more oxygen, contained in increased blood flow, is delivered to the neurons by haemoglobin, which exhibits different magnetic properties in its oxygenated and deoxygenated states. The mechanism of the BOLD signal is shown in Fig. 1.1. It is an indirect marker of neural activity, as it is based on focal blood flow modulated by local brain metabolism.

Figure 1.1: BOLD signal mechanism for fMRI.

The majority of fMRI analyses to date have examined changes in BOLD amplitude as a result of an external stimulus, which identifies the functional specialization of a particular brain region (functional segregation).

Figure 1.2: Example of an fMRI block design experiment. A subject performs a certain task, e.g. bulb squeezing, while BOLD signals are collected. Alternating between task and rest generates the images required for inferring brain activity. The voxel time courses are then extracted for subsequent analysis such as activation detection. The figure is adapted from [105].

The exact amplitude of the BOLD signal is not directly comparable across subjects, since BOLD fMRI is a contrast imaging technique (i.e. unitless) rather than a quantitative imaging technique.
It is standard to determine the relative differences in BOLD signal amplitude across two tasks (e.g. bulb squeezing vs rest). Traditionally, this has been done in a block design, where the subjects may, for example, squeeze a bulb for 20-30 s followed by 20-30 s of rest, with the cycle repeated (Fig. 1.2). This alternation of experimental and control tasks tends to make analysis methods more robust against erroneous interpretations based on non-neuronal slow drifts in the signal and/or fatigue effects that would bias interpretations if the experimental task were done only at the beginning or end of the run. Block-related designs generally assume that any hemodynamic response to neuronal activity is saturated by rapidly and repeatedly performing the same task within a block.

An alternate approach is to assess the BOLD response to a single stimulus, such as a simple motor movement. This has the advantage of comparing stimuli that might have the same loci of activation but different amplitudes of neuronal (and subsequent hemodynamic) response, but has the disadvantage of significantly prolonging scanning time, as the hemodynamic response must decay sufficiently before the next stimulus can be presented.

To infer task-related activation or specialization, hypothesis-driven methods such as the standard General Linear Model (GLM) [48] have been widely adopted to examine changes in BOLD amplitude.

The spatial patterns of the BOLD signals can also be altered by task-related activity. For instance, spatial or "3D texture" descriptors such as 3D Moment Invariants (3DMI), which are invariant to the exact coordinate system, can be used to examine the effects of task-related changes in fMRI [106].

Since most cognitive states involve the coherent activation of several functionally specialized regions, the concept of brain connectivity has emerged, which estimates the interaction patterns between discrete brain regions [13].
Brain connectivity is a promising way to investigate functional coordination and has provided a system-level view of how the brain works.

A more recent type of paradigm, the so-called "resting-state" study, whereby individuals are instructed to simply rest quietly with their eyes closed and remain awake, is well suited to brain connectivity studies. In this condition, spatially widespread, unprompted activity not attributable to specific external stimuli can be observed. Statistical analyses of the spontaneous fluctuations in the BOLD signal can then be performed to detect the intrinsic activity of the human brain [46]. Resting-state fMRI has emerged as a powerful tool for discovery science that is capable of generating detailed maps of complex neural systems [14].

Figure 1.3: Example of modeling the functional brain connectivity network at the ROI level. The functional connectivity network can be represented as a graph, with each node representing a brain ROI and each edge representing the relationship between two ROIs. (A) A collection of brain ROIs is first selected. (B) The time course of each ROI is extracted for pairwise or multivariate analysis. (C) The brain connectivity network is estimated. The figure is adapted from [146].

To reveal whole-brain organization, brain connectivity studies, which we are interested in, play a significant role. A typical example of brain connectivity network modeling at the Region of interest (ROI) level is shown in Fig. 1.3. The interactions between brain regions have been increasingly recognized as important for understanding normal brain function and the pathophysiology of many related diseases. It has been suggested that some disorders, such as Schizophrenia and PD, are related to dysfunctions of connectivity networks [135].
Compared with traditional methods for functional specialization analysis, brain connectivity network modeling allows the exploration of the cooperation between multiple brain regions and the extraction of more informative features of the neural systems.

1.2 Brain Connectivity Network Modeling

The study of brain connectivity has enhanced our understanding of underlying brain function in both normal and disease states, in addition to the traditional approaches focusing on regional activity detection.

Brain connectivity patterns have been inferred based on bivariate analyses such as correlation thresholding [19], frequency-based coherence analysis [125], mutual information [128], Granger causality (GC) derived from bivariate autoregressive models [54], and so on. Multivariate models, including the Multivariate autoregressive model (MAR) [62], Structural equation modeling (SEM) [97] and Dynamic causal modeling (DCM) [47], have also been proposed to assess brain connectivity. Other commonly used approaches include linear decomposition methods such as Independent component analysis (ICA) [99, 100], sparsity-induced modeling [24, 149] and Bayesian network models [82, 117]. There are different (but not mutually exclusive) ways in which these proposed brain connectivity modeling approaches can be categorized: exploratory vs confirmatory, linear vs nonlinear, directional vs bidirectional connectivity, and voxel-level vs ROI-level modeling.

The most straightforward approach, the correlation threshold method [19], estimates how strongly two brain regions interact with each other by calculating the correlation coefficient between their activities. If the correlation coefficient is sufficiently high that it cannot be attributed to randomness alone, the two regions are considered associated with each other. Correlation thresholding is statistically rigorous.
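As a rough illustration of the correlation threshold method, the following sketch keeps an edge whenever the pairwise Pearson correlation is statistically significant. The helper name and toy data are ours, for illustration only, not from the thesis:

```python
import numpy as np
from scipy import stats

def correlation_threshold_network(Y, alpha=0.05):
    """Keep an edge i--j when the Pearson correlation between the
    time courses of ROIs i and j is significant at level alpha.

    Y: (T, K) array holding K ROI time courses of length T.
    Returns the set of undirected edges (i, j) with i < j.
    """
    T, K = Y.shape
    edges = set()
    for i in range(K):
        for j in range(i + 1, K):
            r, p = stats.pearsonr(Y[:, i], Y[:, j])
            if p < alpha:
                edges.add((i, j))
    return edges

# Toy data: "ROI" 1 is a noisy copy of ROI 0; ROI 2 is unrelated.
rng = np.random.default_rng(0)
roi0 = rng.standard_normal(200)
roi1 = roi0 + 0.3 * rng.standard_normal(200)
roi2 = rng.standard_normal(200)
Y = np.column_stack([roi0, roi1, roi2])
edges = correlation_threshold_network(Y)
```

In practice one would also correct for the K(K-1)/2 multiple comparisons, e.g. with a Bonferroni or FDR procedure.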
However, standard pairwise correlation cannot distinguish between direct and indirect interactions (i.e., whether two components interact directly or indirectly through a third component).

Instead of simply making inferences about co-varying brain regions, partial correlation can be employed to estimate whether one brain region has a direct influence on another [96], as it measures the normalized correlation with the effects of all other variables removed. The application of partial correlation to inferring the relationship between two variables is based on the conditional independence test. The definition of conditional independence is as follows: X and Y are conditionally independent given Z if and only if P(XY|Z) = P(X|Z)P(Y|Z). It is similar to the pairwise independence definition P(XY) = P(X)P(Y), but conditioned on a third variable Z. Note that pairwise independence does not imply conditional independence, and vice versa (Fig. 1.4). For instance, if the activities of two brain regions A and B are commonly driven by that of a third region C, then the activities of A and B may be correlated in a pairwise fashion; but if the influence from C is removed, their activities become independent, as shown in Fig. 1.4 (b). On the other hand, if the activities of two brain regions are correlated even after all possible influences from other regions are removed (as shown in Fig. 1.4 (c)), then very likely there is a direct connection between them (i.e., the two regions are conditionally dependent). Therefore, conditional dependence implies that two brain regions are directly connected. It is a key concept in multivariate analyses such as graphical modeling, where two nodes are connected if and only if the corresponding variables are not conditionally independent.

Different from the pairwise analysis between two variables discussed above, linear decomposition methods such as Principal component analysis (PCA) or ICA [100] can be used to assess which voxels have a tendency to coactivate.
These are data-driven approaches, which are suitable in applications where models of brain activity are not available. For instance, ICA decomposes BOLD patterns into spatially independent maps and their associated time courses. It represented a significant shift from the traditional hypothesis-based approach to fMRI analysis when first proposed [100]. Because no time course of activation needs to be specified a priori, it is ideally suited to assessing resting-state fMRI data [38, 43] or situations where the anticipated activation patterns may deviate from normal. Thus ICA analysis of fMRI has been widely used to study clinical populations, e.g. Alzheimer's disease [56], depression [57], schizophrenia [69], mild cognitive impairment [116] and non-communicative brain-damaged patients [147].

Figure 1.4: Illustration of pairwise dependence and conditional dependence.

Similar to the linear decomposition methods, clustering techniques, such as the self-organizing map (SOM) [45, 113], k-means clustering [9], hierarchical clustering [34] and graph clustering [64], are also data-driven approaches for exploring unknown interactions between brain regions. They are based on the assumption that if the time courses of voxels and/or ROIs tend to cluster together, there are likely interactions between them. Clustering is usually implemented with fast, heuristic algorithms and thus is suitable for large-scale problems where it is difficult to perform rigorous statistical analysis. However, the data-driven nature also brings disadvantages, as certain algorithms may fall into locally optimal solutions or their convergence cannot be proved. Statistical criteria such as specificity and sensitivity generally cannot be theoretically analyzed for clustering methods.

Note that the aforementioned approaches cannot determine the direction of connections. To estimate direction, one popular approach is Granger causality (GC) [54].
Granger causality is based on statistical hypothesis testing to determine whether one time series can be used to at least partially predict another. Another option is Patel's conditional dependence measure, which estimates the connectivity between two variables by measuring the imbalance of conditional probability between them [112]. These two methods are usually considered confirmatory methods due to their requirement of prior knowledge of the network structure.

The multivariate autoregressive (MAR) model [62], structural equation model (SEM) [97] and dynamic causal model (DCM) [47] are popular multivariate regression models proposed for estimating brain connectivity. The MAR model focuses on lagged interactions and incorporates covariance as well as temporal information across samples. It represents one sample of a time series as a weighted sum of its previous samples,

    Y_t = \sum_{p=1}^{P} A_p Y_{t-p} + e_t,    (1.1)

where Y_t is the K-dimensional vector of BOLD signal values of K ROIs at time t, A_p is the MAR coefficient matrix at time lag p, and e_t is the noise term. Different from the MAR model, which infers lagged information, SEM estimates the simultaneous interactions between brain ROIs,

    Y_t = M Y_t + e_t,    (1.2)

where M represents the connection strength matrix. DCM differs from both by accommodating nonlinear and dynamic activities between brain regions: it models neural activities as hidden variables. Multivariate regression models are considered statistically rigorous, flexible, and supported by many well-developed algorithms. However, a major drawback of these models is that the computational cost grows exponentially with the number of ROIs. This typically restricts their use to confirmatory studies examining a few ROIs.

fMRI has relatively few time points, and the number of ROIs may be large. Modeling brain connectivity using fMRI signals is therefore a difficult statistical inference problem.
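Returning to the MAR model of Eq. (1.1): for a single lag (P = 1) its coefficient matrix can be estimated by ordinary least squares. A minimal simulation sketch (the coefficient matrix, noise level and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Ground-truth MAR(1) coefficient matrix for K = 3 ROIs (stable: eigenvalues < 1).
A = np.array([[0.5, 0.2, 0.0],
              [0.0, 0.4, 0.3],
              [0.0, 0.0, 0.6]])

# Simulate Y_t = A Y_{t-1} + e_t.
T = 5000
Y = np.zeros((T, 3))
for t in range(1, T):
    Y[t] = A @ Y[t - 1] + 0.5 * rng.standard_normal(3)

# Least-squares estimate: regress Y_t on Y_{t-1}.
X, Z = Y[:-1], Y[1:]
A_hat = np.linalg.lstsq(X, Z, rcond=None)[0].T

print(np.round(A_hat, 2))  # close to the true A
```

With enough samples the estimate recovers A closely; with short fMRI series and many ROIs this regression becomes ill-posed, which motivates the sparsity-based methods discussed next.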
A sparsity assumption has therefore been imposed on the connectivity networks, which favors both computational efficiency and biological interpretability. Least absolute shrinkage and selection operator (LASSO) based approaches combine computational efficiency with the ability to deal with high dimensionality, and hence methods including the robust LASSO and sparse inverse covariance estimation have been proposed [24, 66]. Sparse dictionary learning techniques have also been developed to assess functional connectivity [148, 150]. For instance, by fitting a model to all the variables, the graphical LASSO estimates a sparse network, whereby ROIs are represented as vertices and variable-wise relationships are represented as edges [24].

Graphical models are well suited to modeling brain connectivity, as their graphical nature assists in the biological interpretation of the connectivity patterns. The linear, non-Gaussian, acyclic model (LiNGAM) is a causal graphical model that assumes the variables have non-Gaussian distributions with non-zero variances, and identifies the brain connectivity structure as a directed acyclic graph [130]. Bayesian network models, which encode conditional dependence/independence relationships into the graph, are the most popular graphical models proposed for studying the interactions between brain regions [82, 118]. They are capable of handling relatively large numbers of brain regions and provide a more rigorous model selection procedure [104].

It is worth noting that many novel approaches have been proposed, aiming to enhance network modeling under different assumptions. Each method has its own advantages and limitations. Depending on the scenario, specific approaches may be designed to meet the demands of particular applications [131].
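As a simplified illustration of how an L1 penalty induces sparse connectivity estimates (a neighborhood-selection-style regression, in the spirit of, though not identical to, the graphical LASSO of [24]), the following sketch solves a LASSO problem with iterative soft-thresholding; the data, penalty and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def lasso_ista(X, y, lam, steps=500):
    """Minimize 0.5*||y - Xw||^2 + lam*||w||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        g = X.T @ (X @ w - y)            # gradient of the quadratic term
        z = w - g / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

# The target ROI depends only on the first two candidate ROIs.
n = 400
X = rng.standard_normal((n, 4))          # time courses of 4 candidate ROIs
y = 0.8 * X[:, 0] - 0.6 * X[:, 1] + 0.3 * rng.standard_normal(n)

w = lasso_ista(X, y, lam=20.0)
print(np.round(w, 2))  # nonzero on the first two coefficients, ~0 elsewhere
```

The irrelevant coefficients are driven exactly to zero, so only the edges supported by the data survive, which is the interpretability benefit the text refers to.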
However, due to the lack of ground truth about brain interaction mechanisms, revealing brain connectivity patterns is still at an exploratory stage, with the goal of uncovering large-scale brain connectivity maps.

1.2.1 Group Level Brain Connectivity Network Modeling

Rather than focusing on an individual subject, biomedical research typically involves a group of subjects in order to make inferences about a population. In addition, as the number of fMRI data sets increases, it becomes both possible and necessary to investigate brain connectivity networks at the group level.

However, group-level approaches for modeling brain connectivity need to handle not only the variances and correlations across subjects, but also the fact that the exact structure of brain connectivity may vary across individuals. The large inter-subject variability poses challenges for fMRI data processing, and the underlying differences may lead to false connections being drawn.

Several methods have been developed to infer group connectivity networks in neuroimaging. As one of the most popular methods, group ICA extends the single-subject ICA algorithm to the multi-subject setting by estimating a set of group components [16]. Based on multi-subject ICA, dual regression has been proposed to identify between-subject differences in resting-state brain connectivity networks [10].

Bayesian model selection [136] handles inter-subject variability and error rate control. However, its currently proposed implementation does not scale well, making it more suitable for confirmatory, rather than exploratory, research. Varoquaux et al. [149] propose a data-driven method to estimate large-scale brain connectivity using Gaussian modeling, dealing with between-subject variability via optimal regularization schemes. Ramsey et al. [119] describe and evaluate a combination of a multi-subject search algorithm and an orientation algorithm for group-level brain connectivity inference.
Extending the structural equation modeling approach, group iterative multiple model estimation (GIMME) adopts a forward and then backward search algorithm to eliminate insignificant connections, and thus estimates the brain connectivity networks [50].

Group studies are often closely linked to exploratory analysis. In contrast to confirmatory studies, which usually involve verification of just a few pre-selected models, exploratory studies must search through a huge number of possible models to find one or a few that are best supported by the data. Efficiency of the search strategy thus becomes important, especially since the number of possible network structures grows super-exponentially with the number of brain ROIs involved. For example, with just seven ROIs, there are more than a billion possible network structures.

Accuracy is another important criterion for an exploratory method. In biological scenarios, the goal is not just to adequately model the overall multivariate time series derived from multiple ROIs, but also to ensure that the structure of the model, from which biological interpretations are made, is accurately depicted. Colloquially, accuracy can be inferred by answering the questions: "How many of the connections in the model inferred from data are actually true?", "How many true connections can be detected by the model?" and "How many non-existing connections in the model are falsely reported?" Therefore, error control is a crucial point in the design of reliable methods for discovering group-level connectivity.

A theoretically elegant and feasible method for group-level exploratory analysis of brain connectivity must offer both efficiency and accuracy of the learned networks.

Moreover, current group analyses attempt to summarize information into one group by treating all subjects equally. However, in neuroimaging studies of subjects in disease states, simply learning the common group structure may neglect the heterogeneity in the population.
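The "more than a billion" figure above can be checked with Robinson's recurrence for the number of labeled directed acyclic graphs on n nodes (a standard combinatorial result, not derived in the thesis): a(n) = Σ_{k=1}^{n} (−1)^{k+1} C(n,k) 2^{k(n−k)} a(n−k), with a(0) = 1.

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def num_dags(n):
    """Number of labeled DAGs on n nodes (Robinson's recurrence)."""
    if n == 0:
        return 1
    return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * num_dags(n - k)
               for k in range(1, n + 1))

print([num_dags(n) for n in range(1, 8)])
# [1, 3, 25, 543, 29281, 3781503, 1138779265] -> over a billion at n = 7
```

So with seven ROIs there are 1,138,779,265 candidate directed acyclic network structures, consistent with the claim in the text.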
Taking both the similarity and the variability into consideration, the subjects could be categorized into a hierarchical structure according to their clinical information, and the group connectivity modeling should be able to characterize the diversity among subjects in a hierarchical manner.

Incorporating a priori knowledge into brain connectivity modeling allows for more flexibility in estimating connectivity at the group level, benefits the biological interpretation, and potentially leads to greater sensitivity in accurately discovering the true brain connectivity. Since the neuroscientific interpretation is largely based on the pattern of inferred connections, imposing prior knowledge on the connectivity models benefits the final biological interpretation more than merely creating a model that fits the overall data well. Incorporating a priori knowledge into connectivity modeling also provides a route to multimodal studies.

1.2.2 Time Varying Brain Connectivity Network Modeling

The dynamics of brain connectivity networks are particularly important, as they are associated with a variety of neurological disorders such as schizophrenia [124], multiple sclerosis [80], PD [90] and post-traumatic stress disorder [85]. For instance, altered contributions of dynamic brain connectivity patterns have been reported in subjects with multiple sclerosis [80]. The network variation of subjects with Parkinson's disease is decreased compared with that of control subjects, an observation that may be related to cognitive rigidity, i.e., difficulty in switching between tasks, frequently observed in PD [90].

A few studies have investigated brain connectivity dynamics [7, 42, 60, 77, 124] and have demonstrated that connectivity can be mediated by learning and/or task performance [7, 42, 124].
In addition to task-related connectivity changes, various groups have assessed dynamic changes during resting-state fMRI.

Frequently, the assumption is made that brain networks change slowly and smoothly with time, and a sliding-window approach is used. By specifying a fixed window length and shifting the window by a given number of sample points, different network learning methods, such as correlation [61, 67, 140], covariance [1, 49] and ICA [72, 123], have been applied to estimate time-dependent interactions within each time window.

A key problem with sliding-window approaches is that it is both critical and difficult to determine an appropriate window length. Too long a window will reduce temporal variability and miss possibly important brain state changes, while too short a window may suffer from large fluctuations, as the small number of samples results in unstable estimates [79]. Some features, such as patterns of co-varying connections, may be largely invariant to different window lengths [80]. To circumvent the issue of window length, time-frequency approaches, such as wavelet-based coherence analysis, have been proposed to estimate dynamic brain connectivity in the time-frequency plane [21].

An alternative to assuming smoothly changing connectivity is to assume that brain states are relatively stable between a few critical time points [92]. These critical time points can then be used to segment the entire brain signal into quasi-stationary sections for the purposes of brain connectivity estimation [37, 86, 92]. Nevertheless, change point detection models can be particularly sensitive to artifacts in the data.

Lagged-interaction approaches, such as multivariate autoregressive models [54], have also been employed to study brain dynamics; these examine brain interactions simultaneously and over adjacent time steps.
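The sliding-window correlation scheme described above can be sketched in a few lines. In this illustrative simulation (signals, window length and step size are assumptions, not thesis parameters), the coupling between two ROIs flips sign halfway through, and windowed correlation tracks the change:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two hypothetical ROI time courses whose coupling switches sign halfway through.
T, win = 600, 60
x = rng.standard_normal(T)
y = np.concatenate([x[:300], -x[300:]]) + 0.5 * rng.standard_normal(T)

# Correlation within each sliding window (window of 60 samples, step of 10).
starts = range(0, T - win + 1, 10)
r = np.array([np.corrcoef(x[s:s + win], y[s:s + win])[0, 1] for s in starts])

print(round(r[0], 1), round(r[-1], 1))  # strongly positive early, negative late
```

The choice win = 60 works here because the state lasts 300 samples; as the text notes, a window longer than the state duration would blur the transition, while a much shorter one would make each windowed estimate unstable.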
However, since the lagged interactions themselves are assumed to be time-invariant, such lagged-interaction models technically fall into the category of stationary models. State space approaches, combining lagged interactions with filtering theory, estimate non-stationary brain connectivity at each time point [74].

Although more and more studies have investigated time-dependent brain networks, several challenges remain. For instance, there is no consensus on the underlying brain connectivity patterns: whether brain coupling changes smoothly or abruptly is still unclear, and the time scale of brain connectivity networks is still under debate. Further investigation of dynamic brain connectivity is therefore still in demand; it may ultimately provide deeper insights into the flexibility and adaptation of underlying brain functions.

1.2.3 Brain Connectivity Network Interpretation

Given the inferred brain connectivity networks, the extraction of useful information for evaluating network properties is of great importance. However, interpreting the estimated brain connectivity networks is challenging due to their spatial complexity.

Several approaches have been proposed to summarize and represent the inferred networks, such as graph theoretical measures originating from graph theory [15]. With a solid theoretical basis, graph theory serves as a feature extraction tool for better describing the brain at the network level, and interesting results have been found by adopting graph theoretical analysis. Small-world topology, which describes brain networks with high efficiency but low wiring cost, has been identified in normal brains [6], demonstrating that the brain is an efficient information processing system. Other graph measures, such as motifs, clustering and modularity, have also been investigated, and provide significant insights into brain organization [15].
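Two of the graph measures behind small-worldness, the average clustering coefficient and the average shortest path length, can be computed directly from an adjacency matrix. A minimal numpy sketch on a toy 5-node graph (the graph itself is an illustrative assumption):

```python
import numpy as np
from collections import deque

def clustering_coefficient(A):
    """Average local clustering coefficient of an undirected adjacency matrix."""
    coeffs = []
    for i in range(len(A)):
        nbrs = np.flatnonzero(A[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = A[np.ix_(nbrs, nbrs)].sum() / 2  # edges among the neighbours
        coeffs.append(2 * links / (k * (k - 1)))
    return float(np.mean(coeffs))

def avg_path_length(A):
    """Average shortest path length via BFS (assumes a connected graph)."""
    n, total = len(A), 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
    return total / (n * (n - 1))

# Toy network: a 5-node ring plus one "shortcut" edge (0-2).
A = np.zeros((5, 5), int)
for i in range(5):
    A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = 1
A[0, 2] = A[2, 0] = 1

print(clustering_coefficient(A), avg_path_length(A))  # 1/3 and 1.4 for this graph
```

A small-world network is, informally, one whose clustering coefficient is high relative to a random graph while its average path length stays comparably short; adding the shortcut edge here both raises clustering and lowers path length.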
In addition, graph theoretical analysis provides insightful information when investigating brain-connectivity-related impairments and disorders such as Parkinson's disease and schizophrenia [59].

Time-varying connectivity patterns are particularly difficult to interpret due to their additional temporal complexity; more measures may be required to capture the full temporal and spatial information. To facilitate such interpretation, eigenconnectivity networks have been proposed, obtained by applying a decomposition approach to the concatenated dynamic connectivity networks [80]. They serve as representative spatial connectivity patterns, which can then be compared across groups. The clustering approach described in [1] is an alternative popular method to identify temporal functional connectivity states from the estimated dynamic connectivity networks.

To summarize, brain connectivity network interpretation approaches play a vital role in facilitating the understanding of estimated brain networks. Comprehensive extraction of useful information from the inference results is of great significance for real biomedical studies. Further exploration of network analysis approaches should be emphasized in order to promote the understanding of inferred brain connectivity.

1.3 Research Objectives and Methodologies

The goal of this thesis is to develop a set of novel brain connectivity modeling approaches that are able to cope with several challenges present in real brain connectivity studies, and hence to further investigate disease-induced changes in brain interaction patterns.

Brain connectivity has not only provided great potential for understanding brain functioning in the normal state, but has also extended insights into related neurological disorders.
However, due to the lack of ground truth about brain interaction mechanisms and the complexity of the brain itself, modeling brain connectivity is a difficult topic with several challenges, including group-level inference, difficult biological interpretation, poor signal-to-noise ratio, temporal dynamics, accuracy control, prior knowledge incorporation, multimodality and efficiency.

Motivated by our particular applications, we are interested in addressing the following concerns and challenges. First, group-level inference is one of the most important issues, as biomedical studies typically involve a group of subjects in order to make inferences about a population, rather than focusing on a single subject. However, it is challenging to deal with the large inter-subject variability while making inferences with sufficient estimation accuracy. Error rate control is thus another important criterion for an exploratory group-level analysis method.

To improve the accuracy, efficiency and interpretability of group network modeling, prior knowledge of brain connectivity can be taken into consideration. In addition, prior knowledge coming from other modalities may further assist group-level brain connectivity estimation, which amounts to a data fusion approach for multimodality studies.

Another challenge in brain network modeling is time-dependent brain connectivity estimation. As our brain is inherently non-stationary, the dynamics of brain connectivity may provide insights into the adaptation and reorganization of brain connections. However, most existing approaches using fMRI signals are based on a stationarity assumption, which may neglect the dynamics of brain interactions.
Time-dependent brain connectivity estimation approaches are thus required to fill this gap.

Figure 1.5: The overview of challenges and objectives of this thesis work.

According to the aforementioned discussion, the main technical challenges can be summarized as generality, accuracy, efficiency, prior knowledge incorporation, multimodality and temporal dynamics modeling. The objectives of this thesis are to develop a set of brain connectivity modeling approaches able to deal with the aforementioned challenges, including group pattern extraction from a population, false discovery rate control, incorporation of prior knowledge, and dynamic brain connectivity estimation. Specifically, the main research contributions of this thesis are:

• Propose a group-level, error-rate-controlled, prior-knowledge-incorporated network modeling approach by extending the original PCfdr algorithm.
• Propose a framework for genetically-informed group brain connectivity modeling.
• Develop a dynamic connectivity estimation approach able to recover smoothly changing coefficients as well as suddenly changing connectivity structures.
• Propose a combined temporal network modeling approach for simultaneous static and dynamic brain connectivity estimation.

Fig. 1.5 illustrates the challenges and objectives of this thesis work.

1.4 Thesis Outline

The outline of the remainder of this thesis is as follows:

In Chapter 2, we extend the original PCfdr algorithm and propose a multi-subject, exploratory brain connectivity modeling approach that allows incorporation of prior knowledge of connectivity and determination of the dominant brain connectivity patterns among a group of subjects.
The proposed approach is applied to real fMRI data derived from subjects with Parkinson's disease on and off L-dopa medication and normal controls performing a motor task, and we find robust group evidence of disease-related changes, compensatory changes and the normalizing effect of L-dopa medication.

In Chapter 3, we propose a framework for genetically-informed group brain connectivity modeling. The proposed method is able to model the diversity of brain connectivity in a population. Subjects are first stratified according to their genotypes, and then a group regularized regression model is employed for brain connectivity modeling. Simulations have been performed to test its performance. The proposed method is then applied to resting-state fMRI data from schizophrenia and normal control subjects, and interesting results are found. It represents a multimodal analysis approach for incorporating genotypic variability directly into brain connectivity analysis.

Chapter 4 presents a time-varying model to investigate the temporal dynamics of brain connectivity networks. The proposed method allows for estimation of abrupt changes in network structure via a fused LASSO scheme, as well as recovery of time-varying networks with smoothly changing coefficients via a weighted regression technique. Simulations demonstrate that the proposed method yields improved accuracy in estimating time-dependent connectivity patterns. The proposed method is then applied to real resting-state fMRI data sets from Parkinson's disease and control subjects. Significantly different temporal and spatial patterns are found to be associated with PD.

In Chapter 5, we further leverage assumptions on the brain dynamics, where joint inference of time-invariant connections as well as time-varying coupling patterns is proposed. We employ a multitask learning model followed by a least squares approach to precisely estimate the connectivity coefficients.
The proposed method is applied to resting-state fMRI data from PD and control subjects, and the eigenconnectivity networks are estimated to obtain representative patterns of both static and dynamic brain connectivity features.

Finally, Chapter 6 summarizes the contributions of this thesis and discusses future research directions.

Chapter 2

False Discovery Rate Controlled, Prior Knowledge Incorporated Group Brain Connectivity Modeling

2.1 Introduction

Graphical models, when applied to functional neuroimaging data, represent brain regions of interest (ROIs) as nodes, and the stochastic interactions between ROIs as edges. However, in most non-brain-imaging applications of graphical models, the primary goal is to create a model that fits the overall multivariate data well, not necessarily to accurately reflect the particular connections between nodes. Yet in applications of graphical models to brain connectivity, the neuroscientific interpretation is largely based on the pattern of connections inferred by the model. This places a premium on accurately determining the "inner workings" of the model, such as accounting for the error rate of the edges in the model.

The false discovery rate (FDR) [11, 137], defined as the expected ratio of spurious connections to all learned connections, has been suggested as a suitable error-rate control criterion when inferring brain connectivity. Compared with the traditional type I and type II statistical error rates, the FDR is more informative in bioinformatics and neuroimaging, since it is directly related to the uncertainty of the reported positive results. When selecting candidate genes for genetic research, for example, researchers may want 70% of selected genes to be truly associated with the disease, that is, an FDR of 30%.

Naively controlling traditional type I and type II error rates at specified levels may not necessarily result in reasonable FDR rates, especially in the case of large, sparse networks.
For example, consider an undirected network with 40 nodes, with each node interacting, on average, with 3 other nodes, i.e., there are 60 edges in the network. An algorithm with a realized type I error rate of 5% and realized power of 90% (i.e., a realized type II error rate of 10%) will recover a network with 60 × 90% = 54 correct connections and [40 × (40−1)/2 − 60] × 5% = 36 false connections, which means that 36/(36+54) = 40% of the claimed connections would not actually exist in the true network! This example, while relatively trivial, demonstrates that the FDR may not be kept suitably low by simply controlling the traditional type I and type II error rates.

Recent work in the machine learning field has started to investigate controlling the FDR in network structures using a generic Bayesian approach and classical FDR assessment [87]. This work was subsequently extended to look specifically at graphical models where the FDR was assessed locally at each node [143].

Li and Wang proposed a network learning method that allows asymptotic control of the FDR globally. They based their approach on the PC algorithm (named after Peter Spirtes and Clark Glymour), a computationally efficient and asymptotically reliable Bayesian network-learning algorithm. The PC algorithm assesses the (non)existence of an edge in a graph by determining the conditional dependence/independence relationships between nodes [134]. However, unlike the original PC algorithm, which controls the type I error rate individually for each edge during conditional independence testing, the Li and Wang algorithm, referred to as the PCfdr algorithm, is capable of asymptotically controlling the FDR at pre-specified levels [82].
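The arithmetic of the 40-node example above can be verified directly:

```python
# Worked example from the text: 40 nodes, 60 true edges,
# realized type I error rate 5%, realized power 90%.
nodes, true_edges = 40, 60
possible_edges = nodes * (nodes - 1) // 2                # 780 candidate edges

true_positives = true_edges * 0.90                       # 54 correctly recovered
false_positives = (possible_edges - true_edges) * 0.05   # 36 spurious edges
fdr = false_positives / (false_positives + true_positives)

print(possible_edges, true_positives, false_positives, fdr)  # 780 54.0 36.0 0.4
```

Despite the seemingly strict per-edge error rates, 40% of reported edges are false, because the 720 absent edges vastly outnumber the 60 present ones.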
The PCfdr algorithm achieves this control by interpreting the learning of a network as testing the existence of edges, so that FDR control over edges becomes a multiple-testing problem, which has a strong theoretical basis and has been extensively studied by statisticians [82].

In this chapter, we introduce extensions to the original PCfdr algorithm and propose a multi-subject brain connectivity modeling approach, allowing it to incorporate prior knowledge and extending it to robustly assess the dominant brain connectivity in a group of subjects. The major distinguishing feature of the proposed approach, compared to the aforementioned approaches, is that the current data-driven approach aims at controlling the FDR directly on the group-level network. When applying the proposed approach to fMRI data derived from ten subjects with Parkinson's disease on and off L-dopa medication and ten normal controls performing a motor task, we found robust group evidence of disease-related changes, compensatory changes and the normalizing effect of L-dopa medication.

2.2 Methods

The initial version of Li's method, called the PCfdr algorithm, was proved to be capable of asymptotically controlling the FDR. It does this by interpreting the learning of a network as testing the existence of edges, so that the FDR control of edges becomes a multiple-testing problem [82].

Here we present an extension of the PCfdr algorithm that can incorporate a priori knowledge, which was not specified in the original PCfdr algorithm. We name the extension the PC+fdr algorithm, where the superscript "+" indicates that it is an extension. It handles prior knowledge with two inputs: Emust, the set of edges assumed to appear in the true graph, and Etest, the set of edges to be tested from the data.
The original PCfdr algorithm can thus be regarded as a special case of the extended algorithm, obtained by setting Emust = ∅ and Etest = {all possible edges}. For the asymptotic performance of the PC+fdr algorithm, we refer the reader to [88].

The other extension is group-level inference. Group-level activity is assessed by considering a mixed-effect model (Step 7 of Algorithm 1). When a priori knowledge is also incorporated, the resulting algorithm is named the gPC+fdr algorithm.

Suppose we have m subjects within a group. Then for subject i, the conditional independence between the activities of two brain regions a and b given other regions C can be measured by the partial correlation coefficient between X_a(i) and X_b(i) given X_C(i), denoted r_{ab|C}(i). Here X• denotes the variables associated with a vertex or vertex set, and the index i indicates that these variables are for subject i. By definition, the partial correlation coefficient r_{ab|C}(i) is the correlation coefficient between the residuals of projecting X_a(i) and X_b(i) onto X_C(i), and can be estimated by the sample correlation coefficient as

    \hat{r}_{ab|C}(i) = \frac{\mathrm{Cov}[Y_{a|C}(i), Y_{b|C}(i)]}{\sqrt{\mathrm{Var}[Y_{a|C}(i)]\,\mathrm{Var}[Y_{b|C}(i)]}},    (2.1)

where

    \beta_{a|C}(i) = \arg\min_{\beta} |X_a(i) - X_C(i)\beta|^2,    (2.2)
    \beta_{b|C}(i) = \arg\min_{\beta} |X_b(i) - X_C(i)\beta|^2,    (2.3)
    Y_{a|C}(i) = X_a(i) - X_C(i)\beta_{a|C}(i),    (2.4)
    Y_{b|C}(i) = X_b(i) - X_C(i)\beta_{b|C}(i).    (2.5)

For clarity, in the following discussion we omit the subscript "ab|C" and simply use the index "i" to emphasize that a variable is associated with subject i.

To study the group-level conditional independence relationships, a group-level model should be introduced for r_i.
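Before moving to the group level, the subject-level estimator of Eqs. (2.1)-(2.5) can be sketched directly: regress X_a and X_b on X_C, then correlate the residuals. The simulated data below are an illustrative assumption; as a sanity check, the result is compared against the standard precision-matrix identity for partial correlation (not part of the thesis derivation).

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical data for one subject: X_a, X_b and two conditioning regions X_C.
N = 1000
XC = rng.standard_normal((N, 2))
Xa = XC @ np.array([0.7, -0.4]) + rng.standard_normal(N)
Xb = XC @ np.array([0.2, 0.5]) + 0.5 * Xa + rng.standard_normal(N)

# Eqs. (2.2)-(2.5): regress X_a and X_b on X_C and take the residuals.
beta_a = np.linalg.lstsq(XC, Xa, rcond=None)[0]
beta_b = np.linalg.lstsq(XC, Xb, rcond=None)[0]
Ya, Yb = Xa - XC @ beta_a, Xb - XC @ beta_b

# Eq. (2.1): partial correlation as the correlation of the residuals.
r_hat = np.corrcoef(Ya, Yb)[0, 1]

# Cross-check: r_ab|C = -P_ab / sqrt(P_aa * P_bb) with P the precision matrix.
P = np.linalg.inv(np.cov(np.column_stack([Xa, Xb, XC]), rowvar=False))
r_prec = -P[0, 1] / np.sqrt(P[0, 0] * P[1, 1])

print(round(r_hat, 3), round(r_prec, 3))  # the two estimates closely agree
```

Here X_b retains a direct dependence on X_a after conditioning on X_C, so the estimated partial correlation is clearly nonzero, corresponding to a surviving edge a ~ b.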
Since partial correlation coefficients are bounded and their sampling distributions are not Gaussian, we apply Fisher's z-transformation to convert the (estimated) partial correlation coefficients r into a Gaussian-like distributed z-statistic z = Z(r).

The group model we employ is

    z_i = z_g + e_i,    (2.6)

where e_i follows a Gaussian distribution N(0, σ_g^2) with zero mean and variance σ_g^2. Consequently, group-level testing of conditional independence amounts to testing the null hypothesis z_g = 0.

Because z_i is unknown and can only be estimated, the inference of z_g should be conducted with ẑ_i = Z(r̂_i). If X_a(i), X_b(i) and X_C(i) jointly follow a multivariate Gaussian distribution, then ẑ_i asymptotically follows a Gaussian distribution N(z_i, σ_i^2) with σ_i^2 = 1/(N_i − p − 3), where N_i is the sample size of subject i's data and p is the number of variables in X_C(i). Therefore, based on Eq. (2.6), we have

    ẑ_i = z_g + e_i + ε_i,    (2.7)

where ε_i follows N(0, σ_i^2), and e_i follows N(0, σ_g^2). This is a mixed-effect model where ε_i denotes the within-subject randomness and e_i denotes the inter-subject variability. At the group level, ẑ_i follows a Gaussian distribution N(z_g, σ_i^2 + σ_g^2).

Note that, unlike in regular mixed-effect models, the within-subject variance σ_i^2 in this model is known, because N_i and p are known given the data X(i) and C. In general, σ_i^2 = 1/(N_i − p − 3) is not necessarily equal to σ_j^2 for i ≠ j, and the inference of z_g should then be conducted in the manner of mixed models, for example estimating σ_g^2 with the restricted maximum likelihood (ReML) approach. However, if the sample size of each subject's data is the same, then σ_i^2 equals σ_j^2. For this balanced case, which is typically true in fMRI applications and is also the case in this study, we can simply apply a t-test to the ẑ_i's to test the null hypothesis z_g = 0.

Replacing Step 7 of the single-subject PCfdr algorithm (i.e.
the within-subject hypothesis test) with the test of z_g = 0, we can extend the single-subject version of the algorithm to its group-level version.

Combining the two extensions, the gPC+fdr algorithm is described in Algorithm 1.

Algorithm 1 The gPC+fdr algorithm.

Input: the multi-subject data D; the undirected edges Emust that are assumed to exist in the true undirected graph Gtrue according to prior knowledge; the undirected edges Etest (Emust ∩ Etest = ∅) to be tested from the data D; and the FDR level q for making inference about Etest.
Output: the recovered undirected graph Gstop.
Notation: a, b denote vertices; E, C denote vertex sets; a ∼ b denotes an undirected edge; adj(a, G) denotes the vertices adjacent to a in graph G; a ⊥ b | C denotes the conditional independence between a and b given C.

1: Form an undirected graph G from Etest ∪ Emust.
2: Initialize the maximum p-values associated with the edges in Etest as Pmax = {pmax_{a∼b} = −1 | a ∼ b ∈ Etest}.
3: Let depth d = 0.
4: repeat
5:   for each ordered pair of vertices a and b such that a ∼ b ∈ E ∩ Etest and |adj(a, G) \ {b}| ≥ d do
6:     for each subset C ⊆ adj(a, G) \ {b} with |C| = d do
7:       Test the hypothesis a ⊥ b | C for each subject and calculate the p-value p_{a⊥b|C} at the group level.
8:       if p_{a⊥b|C} > pmax_{a∼b} then
9:         Let pmax_{a∼b} = p_{a⊥b|C}.
10:        if every element of Pmax has been assigned a valid p-value by step 9 then
11:          Run the FDR procedure, Algorithm 2, with Pmax and q as the input.
12:          if the non-existence of certain edges is accepted then
13:            Remove these edges from G.
14:            Update G and E.
15:            if a ∼ b is removed then
16:              break the for loop at line 6.
17:            end if
18:          end if
19:        end if
20:      end if
21:    end for
22:  end for
23:  Let d = d + 1.
24: until |adj(a, G) \ {b}| < d for every ordered pair of vertices a and b such that a ∼ b is in E ∩ Etest.

Algorithm 2 FDR-stepup [12]

Input: a set of p-values {p_i | i = 1, ..., H}, and the FDR threshold q.
Output: the set of rejected null hypotheses.

1: Sort the p-values of the H hypothesis tests in ascending order as p(1) ≤ . . .
≤ p(H).
2: Let i = H, and H* = H (or H* = H(1 + 1/2 + · · · + 1/H), depending on the assumption about the dependency among the test statistics).
3: while (H*/i) p(i) > q and i > 0 do    (2.8)
4:   Let i = i − 1.
5: end while
6: Reject the null hypotheses associated with p(1), ..., p(i), and accept the null hypotheses associated with p(i+1), ..., p(H).

2.3 Real Application

In order to assess the real-world performance of the proposed method, we apply the gPC+fdr algorithm to infer group brain connectivity networks from fMRI data collected from twenty subjects. All experiments were approved by the University of British Columbia Ethics Committee. Ten normal subjects and ten Parkinson's disease (PD) patients participated in the study. During the fMRI experiment, each subject was instructed to squeeze a bulb in their right hand to control an "inflatable" ring so that it smoothly passed through a vertically scrolling tunnel. The normal controls performed only one trial, while the PD subjects performed two, once before L-dopa medication and the other approximately an hour later, after taking medication. Because L-dopa has dramatic clinical effects on motor performance, we expected that the extended PCfdr algorithm could detect the normalizing effect of L-dopa on brain connectivity.

Three groups were categorized: group N for the normal controls, group Ppre for the PD patients before medication, and group Ppost for the PD patients after taking L-dopa medication. For each subject, 100 observations were used in the network modeling.
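The step-up procedure of Algorithm 2 (with H* = H, the independent-statistics case, i.e. the Benjamini-Hochberg method) can be sketched in a few lines; the p-values below are made up for illustration.

```python
def fdr_stepup(pvals, q):
    """FDR step-up (Algorithm 2, H* = H): return indices of rejected hypotheses."""
    H = len(pvals)
    order = sorted(range(H), key=lambda j: pvals[j])  # ascending p-values
    i = H
    while i > 0 and H * pvals[order[i - 1]] / i > q:  # condition (2.8)
        i -= 1
    return sorted(order[:i])  # reject the i smallest p-values

pvals = [0.001, 0.004, 0.010, 0.020, 0.030, 0.30, 0.45, 0.9]
print(fdr_stepup(pvals, q=0.05))  # -> [0, 1, 2, 3, 4]
```

Note the step-up character: once some p(i) satisfies the threshold, all smaller p-values are rejected as well, even if an individual one would fail its own comparison.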
For details of the data acquisition and preprocessing, please refer to [111]. Twelve anatomically-defined Regions of Interest (ROIs) were chosen based on prior knowledge of the brain regions associated with motor performance (Table 2.1).

Table 2.1: The names and abbreviations of the 12 selected brain ROIs.

Full Name of Brain Region | Abbreviation
Left/Right lateral cerebellar hemispheres | lCER, rCER
Left/Right globus pallidus | lGLP, rGLP
Left/Right putamen | lPUT, rPUT
Left/Right supplementary motor cortex | lSMA, rSMA
Left/Right thalamus | lTHA, rTHA
Left/Right primary motor cortex | lM1, rM1

"l" or "r" in the abbreviations stands for "Left" or "Right", respectively.

We utilized the two extensions of the PCfdr algorithm and learned the structures of first-order group dynamic Bayesian networks from the fMRI data. Because the fMRI BOLD signal can be considered as the convolution of the underlying neural activity with a hemodynamic response function, we assumed that there must be a connection from each region at time t to its mirror at time t + 1. We also assumed that there must be a connection between each region and its homologous region in the contralateral hemisphere. Since the TR interval (i.e., the sampling period) was a relatively long 1.985 seconds, we restricted ourselves to learning only connections between ROIs without time lags. In total, there are 12 + 6 = 18 pre-defined connections and 12 × (12 − 1) ÷ 2 − 6 = 60 candidate connections to be tested. The brain connectivity networks (with a target FDR of 5%) learned for the normal group (N) and the PD groups before (Ppre) and after (Ppost) medication are compared in Figure 2.1. Note the connection between the cerebellar hemisphere and the contralateral thalamus in the normal subjects, and between the Supplementary Motor Area (SMA) and the contralateral putamen, consistent with prior knowledge. Interestingly, in the Ppre subjects, the left cerebellum now connects with the Right SMA, and the Right SMA <—> left putamen connection is lost.
Also, there are now bilateral primary motor cortex (M1) <—> putamen connections in the Ppre group, presumably as a compensatory mechanism. After medication (Ppost), the Left SMA <—> Left thalamus connection is restored to normal.

2.4 Conclusion and Discussion

Graphical models used to infer brain connectivity from fMRI data have relied on the assumption that, if a model accurately represented the overall activity in several ROIs, the internal connections of such a model would accurately reflect the underlying brain connectivity. The PCfdr algorithm was designed to loosen this overly restrictive assumption and to asymptotically control the FDR of the network connections inferred from data.

In this chapter, we first presented the PC+fdr algorithm, an extension of the PCfdr algorithm that allows prior knowledge of the network structure to be incorporated into the learning process, greatly enhancing its flexibility in practice. The PC+fdr algorithm handles prior knowledge with two inputs: Emust, the set of edges that is assumed to appear in the true graph, and Etest, the set of edges that are to be tested from the observed data. The other extension to the PCfdr algorithm described here is the ability to infer brain connectivity patterns at the group level, with inter-subject variance explicitly taken into consideration. With these two extensions combined, the proposed gPC+fdr algorithm is able to make inference at the group level while incorporating prior knowledge and controlling the error rate. When applying the proposed gPC+fdr algorithm to fMRI data collected from PD subjects performing a motor tracking task, we found group evidence of disease-related changes (e.g., loss of Left cerebellar <—> SMA connectivity), compensatory changes in PD (e.g., bilateral M1 <—> contralateral putamen connectivity) and evidence of restoration of connectivity after medication (Left SMA <—> Left thalamus).
The tremendous variability in the clinical progression of PD is likely due not only to variability in the rate of disease progression, but also to variability in the magnitude of compensatory changes. This highlights the importance of the proposed method, as it allows robust estimation of disease effects, compensatory effects and medication effects, all with a reasonable sample size, despite the enhanced inter-subject variability seen in PD.

Figure 2.1: (a) Learned brain connectivity for the normal group (group N). (b) Learned brain connectivity for the PD group before medication (group Ppre). (c) Learned brain connectivity for the PD group after medication (group Ppost). Here "L" and "R" refer to the left and right sides, respectively. The solid lines are predefined connectivity, and the dashed lines are learned connectivity.

Chapter 3
A Genetically-informed, Group Brain Connectivity Modeling Framework

3.1 Introduction

Schizophrenia, a chronic, disabling mental disorder, has a profound health care impact. While the exact pathoetiology of the condition is unknown, genetics plays a vital role, with heritability estimated to be as high as 80% [122], and environmental and gene/environment interactions explaining the remainder [20]. Thus the ability to merge genetic information with phenotypes, and with intermediate endophenotypes based on imaging features, may play a key role in understanding the disease [3].

Exploring genetic influences on structural and functional brain imaging is important for the investigation of partially inherited psychiatric diseases including Schizophrenia [8]. However, how to deal with the high-volume datasets inherent to both genetic and imaging studies remains a challenge. Early research usually involved candidate gene studies. For example, genetic variations coding for the serine-threonine protein kinase AKT1 were shown to alter human prefrontal-striatal structure and function [139].
This has rapidly been extended to multivariate approaches allowing for voxel-wise/genome-wide association studies [51, 65, 151]. These have led to, e.g., the observation that there is a strong genetic influence on default-mode connectivity [53].

Schizophrenia appears to be particularly suited for assessing the associations between genetics and brain connectivity. Neonates at genetic risk for Schizophrenia have reduced overall anatomical connectivity compared to neonates without the same risk [129]. Previous research has suggested that functional brain connectivity is more sensitively altered in Schizophrenia than amplitude-based, univariate, voxel-wise fMRI measures such as entropy analysis [8, 95]. Additionally, by representing brain activity as a connectivity network, graph theoretical analysis can be applied to study its structural and topological properties [15], providing a way to characterize brain activity at the system level.

A number of mathematical models have been introduced for studying brain connectivity [132], including graphical models such as Dynamic Causal Models [47] and Bayesian Networks [83]. As biomedical experiments typically involve groups of subjects in order to make inferences about a population, several group methods have been proposed, such as Bayesian model selection [136], group covariance estimation [149], the multi-subject search algorithm [119], and the group PCfdr algorithm [88]. These group analyses attempt to stratify subjects based on, e.g., disease states, and treat the subjects equally within each group.
However, in neuroimaging studies, simply grouping subjects based on disease states alone may neglect heterogeneity in the population, possibly violating statistical assumptions used in subsequent analyses.

In the proposed approach, we place emphasis on group fMRI connectivity estimation, as we are interested in effectively modeling intra-group diversity while still allowing for differentiation between subjects in a graphical model framework. We propose to first categorize subjects within each group according to ancillary (e.g., genetic) information, allowing the diversity in brain connectivity among subjects to be modeled more accurately while still fitting the overall multivariate data well at the group level. This approach is essentially equivalent to incorporating prior knowledge into the brain connectivity analysis.

We propose the use of prior information, in this case single nucleotide polymorphisms (SNPs) in the D-amino acid oxidase activator (DAOA) gene, to inform group brain connectivity networks. In brain connectivity modeling, prior knowledge can be used for purposes such as variable/model selection, which benefits computational efficiency, and/or specifying which ROIs to use, easing the final biological interpretation [47, 54, 132]. Incorporating prior knowledge into modeling is usually done with confirmatory models, and could conceivably have a detrimental effect on models employing a purely exploratory approach [84]. However, incorporating a priori knowledge into a data-driven method may actually provide more flexibility [88].
While prior knowledge is a useful tool in the variable/model selection process, information from other modalities, such as genetic information, provides complementary guidance for brain connectivity pattern estimation. Since the neuroscientific interpretation of brain connectivity analysis is largely based on the pattern of inferred connections, rather than just on ensuring that the model fits the overall data well, imposing such prior knowledge on a connectivity model will greatly assist the final biological interpretation.

The proposed approach differs from previous approaches to genetically influenced brain interactions. Previous approaches can be considered separate-model approaches, where a possibly common strategy is used to analyze brain connectivity and genetic features separately, and univariate/multivariate associations between features are then explored. In contrast, our proposed approach is a direct joint-model approach, where the known genotypic information is explicitly modeled as prior knowledge in brain connectivity modeling. A penalized group fused regression model is used based on the time courses from a priori specified ROIs. Each ROI time course is in turn predicted from all other ROI time courses at zero lag using a group regression model with a penalty term incorporating genotypic dissimilarity (Fig. 3.1). While we emphasize genetic information as the prior knowledge informing the connectivity model, we note that it is also possible to combine the proposed group modeling approach with other prior information, such as clinical indices.

In the rest of this chapter, we first introduce a group fused brain connectivity modeling approach, including the model, the objective function, and the implementation method. We name the proposed penalized regression model the overlapped group fused model (OGFM).
We then use simulations to compare the performance of the proposed method with other state-of-the-art modeling approaches. Finally, we apply the proposed approach to a joint resting-state fMRI/genetic data set obtained from eleven subjects with Schizophrenia and nine normal control subjects, and demonstrate that brain connectivity patterns are jointly modulated by disease states and genotypes.

Figure 3.1: The proposed framework for genetically-informed group brain connectivity modeling.

3.2 Methods

We first introduce the group modeling approach incorporating genetic information as a priori knowledge. An asymptotic analysis for large samples, given in the appendix, is further used to justify the proposed method.

3.2.1 Overlapped Group Fused Model

Prior studies have proposed regression approaches to model brain connectivity, such as autoregressive models [144] or the group LASSO (Least Absolute Shrinkage and Selection Operator) model [23]. In the approach used in the group LASSO model, and the one we adopt here, the fMRI time course of an ROI is regarded as the response variable and is predicted from the time courses of all other ROIs at zero lag as

Y_j = X_{-j} \beta + e,   (3.1)

where Y_j is the time course of the jth ROI, X_{-j} is the predictor matrix based on the time courses of all other ROIs except the jth, \beta is the coefficient vector, and e is the Gaussian noise term. Note that since both Y_j and X_{-j} have similar autocorrelation properties, it is not necessary to whiten the regressors to ensure i.i.d. Gaussian residuals. Also, based on previous studies, connections between brain regions can be considered to form a sparse network [23]. One computationally efficient approach to promote sparsity in the coefficient vector is to use an l1 penalty on the regression coefficients, i.e., the regularized LASSO method [141].
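As a concrete illustration, the node-wise regression of Eq. 3.1 with an l1 penalty can be sketched as follows. This is a minimal stand-in, not the thesis implementation: `lasso_cd` is a plain cyclic coordinate-descent LASSO solver with soft-thresholding, and `nodewise_connectivity` assembles the per-ROI coefficient vectors into a connectivity matrix:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize ||y - X b||^2 + lam * ||b||_1 by cyclic coordinate descent
    with soft-thresholding (a minimal stand-in for the regularized LASSO)."""
    n, k = X.shape
    b = np.zeros(k)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(k):
            r = y - X @ b + X[:, j] * b[j]      # partial residual excluding j
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam / 2, 0.0) / col_sq[j]
    return b

def nodewise_connectivity(ts, lam):
    """ts: (N time points, R ROIs). Regress each ROI on all others at zero
    lag (Eq. 3.1) and collect the coefficients into an R x R matrix
    (zero diagonal)."""
    n, r = ts.shape
    W = np.zeros((r, r))
    for j in range(r):
        others = [i for i in range(r) if i != j]
        W[j, others] = lasso_cd(ts[:, others], ts[:, j], lam)
    return W
```

Row j of the returned matrix holds the coefficients predicting ROI j from all other ROIs, i.e., the connectivity strengths into the target ROI.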
Various extensions of l1-penalized regression methods have been proposed, for example the group robust LASSO method alluded to earlier [23], the two-graph guided multi-task LASSO for eQTL mapping [26], and the variable selection model with an elastic net penalty [161].

In this study, we introduce an overlapped group fused model (OGFM), which controls the variability in group modeling by adding a fusion penalty to the LASSO model at the group level. In the following discussion, for simplicity, we will write Y_j as Y and X_{-j} as X. Consider a group with S subjects. For each subject i ∈ {1, 2, ..., S}, Y(i) and X(i) are the response vector and predictor matrix, where Y(i) has dimension N × 1 and X(i) has dimension N × K, with N being the number of time points and K the number of features (= number of ROIs − 1). Therefore, in the group design shown in Equ. 3.2, Xtot and Ytot are composed of S blocks, and the coefficient matrix Btot = diag(B(1), B(2), ..., B(S)) is a block matrix of dimension SK × S. The block B(i) of Btot represents the coefficients between Y(i) and X(i) for subject i ∈ {1, 2, ..., S}:

Ytot = (Y(1), Y(2), ..., Y(S))
     = (X(1), X(2), ..., X(S)) × diag(B(1), B(2), ..., B(S)) + E
     = Xtot Btot + E.   (3.2)

In this model, the off-diagonal elements of Btot are all zeros, and we only need to infer the diagonal blocks B(1), ..., B(S). To promote a fusion effect across the group structures, grouping relationships are applied to control the structural similarities within the group. Suppose G = (V, E) is a grouping relationship graph defined on the subjects, where V is a set of vertices representing the subjects and E is a set of edges representing the relationships between subjects.
Then the objective function of the overlapped group fused model that we need to minimize is defined as:

f(B) = \sum_{i=1}^{S} ||Y(i) - X(i)B(i)||_F^2 + \lambda \sum_{i=1}^{S} ||B(i)||_{l1} + \gamma \sum_{e_{lm} \in E} W(e_{lm}) ||B(l) - B(m)||_{l1}
     = \sum_{i=1}^{S} \sum_{j=1}^{N} (Y_j(i) - \sum_{k=1}^{K} X_{jk}(i) B_k(i))^2 + \lambda \sum_{i=1}^{S} \sum_{k=1}^{K} |B_k(i)| + \gamma \sum_{e_{lm} \in E} W(e_{lm}) \sum_{k=1}^{K} |B_k(l) - B_k(m)|   (3.3)

where Y(i) and X(i) are the response vector and predictor matrix for subject i ∈ {1, 2, ..., S}, and Y_j(i) is the jth sample in Y(i). X_{j•}(i) and X_{•k}(i) stand for the jth sample row and the kth feature column of the predictor matrix X(i). B(i) is the coefficient vector between Y(i) and X(i), and B_k(i) represents the kth coefficient, corresponding to X_{•k}(i). W(e_{lm}) is a weight assigned to edge e_{lm} ∈ E to control the grouping effect between subjects l and m. Here we set the weight W(e_{lm}) of edge e_{lm} as the squared value of the similarity score, which is calculated from the prior information (the genetic variations in this study). With this formulation, a higher grouping score leads to less discrepancy between the models of subjects l and m. The first l1 penalty controls the sparsity of the learned coefficients, and the second penalty controls the sparsity of the between-subject differences. The optimization problem in Equ. 3.3 can be efficiently solved by a coordinate-descent algorithm [26, 75]. Optimization of the non-differentiable objective function in Equ. 3.3 can be transferred to an iterative minimization of its surrogate differentiable function (Equ. 3.4):

minimize_B \sum_{i=1}^{S} \sum_{j=1}^{N} (Y_j(i) - \sum_{k=1}^{K} X_{jk}(i) B_k(i))^2 + \lambda \sum_{i=1}^{S} \sum_{k=1}^{K} (B_k(i))^2 / D_{ki} + \gamma \sum_{e_{lm} \in E} W^2(e_{lm}) \sum_{k=1}^{K} (B_k(l) - B_k(m))^2 / C_{klm}

subject to \sum_{k,i} D_{ki} = 1, \sum_{k,l,m} C_{klm} = 1, and D_{ki}, C_{klm} ≥ 0.   (3.4)

We can optimize Equ.
3.4 by its Lagrangian form and obtain the solutions as

D_{ki} = |B_k(i)| / \sum_{k'=1}^{K} \sum_{i'=1}^{S} |B_{k'}(i')|   (3.5)

C_{klm} = W(e_{lm}) |B_k(l) - B_k(m)| / \sum_{k'=1}^{K} \sum_{e_{l'm'} \in E} W(e_{l'm'}) |B_{k'}(l') - B_{k'}(m')|   (3.6)

B_k(i) = [\sum_{j=1}^{N} X_{jk}(i) R_{jk}(i) + \gamma \sum_{e_{il} \in E} W^2(e_{il}) B_k(l) / C_{kil} + \gamma \sum_{e_{mi} \in E} W^2(e_{mi}) B_k(m) / C_{kmi}] / [\sum_{j=1}^{N} (X_{jk}(i))^2 + \lambda / D_{ki} + \gamma \sum_{e_{il} \in E} W^2(e_{il}) / C_{kil} + \gamma \sum_{e_{mi} \in E} W^2(e_{mi}) / C_{kmi}],

where R_{jk}(i) = Y_j(i) - \sum_{k' \neq k} X_{jk'}(i) B_{k'}(i).   (3.7)

Here k' represents a feature index and j' a sample index. The least squares solution between Y(i) and X(i) is used as the initial value for B(i). From the initial value of B, we iteratively update D, C and B until B converges. A summary of the algorithm implementation is shown in Table 3.1. Note that when λ is zero, the model becomes regression with the group fused penalty only. The asymptotic properties of this estimate are given in the appendix.

3.2.2 Model Selection and Degree of Freedom

A key issue for our OGFM is determining the tuning parameters λ and γ, which affect the subsequent model selection and parameter estimation. We propose to select the optimal λ and γ according to the Bayesian information criterion (BIC), which works as follows. We first specify a grid of tuning parameter pairs (λ, γ). For each pair (λ, γ), we calculate the degree of freedom (df) of the proposed OGFM estimate, d̂f(λ, γ), defined in Equ. 3.12. In essence, the degree of freedom quantifies the number of free parameters in the estimated OGFM (Table 3.1). Then the estimated d̂f(λ, γ) is used for calculating the BIC,

BIC(λ, γ) = N · ln(σ̂_e²) + d̂f(λ, γ) · ln(N),   (3.8)

where σ̂_e² is the estimated variance of the residuals.
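The iterative D/C/B updates of Eqs. 3.5-3.7, summarized in Table 3.1, can be sketched as below. This is a simplified illustration under stated assumptions, not the thesis implementation: the grouping graph is passed as a symmetric S × S weight matrix (a positive entry marks an edge), small constants guard the divisions, and a fixed iteration count replaces the convergence threshold θ:

```python
import numpy as np

def ogfm_fit(Xs, Ys, W, lam, gam, n_iter=50, eps=1e-8):
    """Sketch of the OGFM surrogate iteration (Eqs. 3.5-3.7, Table 3.1).
    Xs, Ys: per-subject design matrices / response vectors.
    W: S x S symmetric weight matrix of the grouping graph."""
    S, K = len(Xs), Xs[0].shape[1]
    # Initialize with per-subject least squares, as in Table 3.1.
    B = np.column_stack([np.linalg.lstsq(X, y, rcond=None)[0]
                         for X, y in zip(Xs, Ys)])            # K x S
    for _ in range(n_iter):
        # Eq. 3.5: D proportional to |B|, normalized to sum to one.
        D = np.abs(B) / (np.abs(B).sum() + eps)
        # Eq. 3.6: C proportional to W(e_lm) |B_k(l) - B_k(m)|.
        C = np.zeros((K, S, S))
        for l in range(S):
            for m in range(S):
                if W[l, m] > 0:
                    C[:, l, m] = W[l, m] * np.abs(B[:, l] - B[:, m])
        C /= C.sum() + eps
        # Eq. 3.7: closed-form coordinate update of each B_k(i).
        for i in range(S):
            for k in range(K):
                r = Ys[i] - Xs[i] @ B[:, i] + Xs[i][:, k] * B[k, i]
                num = Xs[i][:, k] @ r
                den = (Xs[i][:, k] ** 2).sum() + lam / (D[k, i] + eps)
                for m in range(S):
                    if W[i, m] > 0:        # sum over edges incident to i
                        num += gam * W[i, m] ** 2 * B[k, m] / (C[k, i, m] + eps)
                        den += gam * W[i, m] ** 2 / (C[k, i, m] + eps)
                B[k, i] = num / den
    return B
```

With two subjects sharing identical data and a strong grouping edge, the fusion penalty simply keeps the two coefficient vectors together at their common least-squares solution.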
Finally, the optimal tuning parameters are defined as

(λ̂_opt, γ̂_opt) = argmin_{λ,γ} BIC(λ, γ).   (3.9)

Table 3.1: Implementation of the overlapped group fused model.
Input: the number of subjects S; {X(i)}_{i=1,...,S}, X(i) ∈ R^{N×K}; {Y(i)}_{i=1,...,S}, Y(i) ∈ R^N; the graph structure G; the corresponding weight matrix W; a pair of tuning parameters (λ, γ); and the convergence threshold θ.
Initialization: set B(i) = argmin_{B(i)} ||Y(i) − X(i)B(i)||² (the least squares solution).
Iterate:
1. Update D_{ki} by Eq. 3.5 for k = 1, ..., K and i = 1, ..., S.
2. Update C_{klm} by Eq. 3.6 for k = 1, ..., K and l, m = 1, ..., S.
3. Update B_k(i) by Eq. 3.7 for k = 1, ..., K and i = 1, ..., S.
Until convergence of B (reaches the convergence threshold θ).
Output: the coefficient matrix B(λ, γ) = B.

It remains to determine the df for the OGFM. For a regression model, the degree of freedom can be estimated as [41]

df = \sum_{i=1}^{N} cov(y_i, ŷ_i) / σ²,   (3.10)

where ŷ_i is the estimated value of y_i, and σ² is the variance of y_i with X fixed. In a general linear model with N > K, the degree of freedom is K. In the LASSO model, the degree of freedom is usually considered to be the number of non-zero coefficients, with upper bound min(K, N). In the fused LASSO model [142], the degree of freedom is defined as the number of non-zero coefficient blocks. Following this line, one straightforward way to define the degree of freedom for the proposed model is to count the number of unique non-zero coefficient values across the subjects for each corresponding feature.

Figure 3.2: Comparison of the estimated degree of freedom (df) and the true degree of freedom. The x axis represents the estimated df and the y axis the true df. The solid line is the least squares regression between the true and estimated df, and the dashed line is the 45° line.
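The unique-non-zero counting rule just described, together with the BIC of Eq. 3.8, can be sketched as follows; `ogfm_df` and `bic` are illustrative names, and coefficients are rounded before comparison to absorb floating-point noise:

```python
import numpy as np

def ogfm_df(B, tol=1e-8):
    """Estimated degrees of freedom of the OGFM: for each feature (row),
    count the unique non-zero coefficient values across the S subjects.
    B is a K x S coefficient matrix."""
    df = 0
    for row in B:
        nz = row[np.abs(row) > tol]
        df += len(np.unique(np.round(nz, 8)))
    return df

def bic(residuals, B, n):
    """BIC of Eq. 3.8 with the plug-in df estimate."""
    sigma2 = np.mean(np.square(residuals))   # estimated residual variance
    return n * np.log(sigma2) + ogfm_df(B) * np.log(n)
```

A coefficient shared exactly by several subjects thus counts only once, so fusing subjects reduces the model's effective complexity in the BIC.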
For instance, in our study, B(i) is a K × 1 coefficient vector for subject i ∈ {1, 2, ..., S}, and K is the number of features. Then, for feature j ∈ {1, 2, ..., K}, each unique non-zero value among B(j, 1), B(j, 2), ..., B(j, S) counts as one degree of freedom:

df = \sum_{j=1}^{K} #{unique non-zero coefficients in B(j, 1), ..., B(j, S)}.   (3.11)

Clearly, this degree of freedom depends on the true coefficient matrix B, which is unknown. However, we can estimate it by the natural plug-in estimate defined as

d̂f(λ, γ) = \sum_{j=1}^{K} #{unique non-zero coefficients in B̂_{(λ,γ)}(j, 1), ..., B̂_{(λ,γ)}(j, S)}.   (3.12)

To verify the estimated degree of freedom defined in Equ. 3.12, we compare the estimated and true degree of freedom using the simulated data described in the Simulations section. We generate the data set with heterogeneous overlapped group structures. As shown in Fig. 3.2, the estimated values are quite close to the true values. Based on this comparison, the estimated degree of freedom can be used as an approximation in the model selection procedure. The implementation of the model selection is shown in Table 3.2.

Table 3.2: Implementation of model selection.
Input: the number of subjects S; {X(i)}_{i=1,...,S}, X(i) ∈ R^{N×K}; {Y(i)}_{i=1,...,S}, Y(i) ∈ R^N; the graph structure G; the corresponding weight matrix W; a set of tuning parameter pairs {(λ, γ)}; and the convergence threshold θ.
For each pair of tuning parameters (λ, γ):
1. Input S, {X(i)}, {Y(i)}, G, W, (λ, γ) and θ into the OGFM algorithm (Table 3.1), and obtain the estimated coefficients B(λ, γ).
2. Calculate BIC(λ, γ) according to Eq. 3.8.
End for.
Select the optimal parameters according to Eq. 3.9: (λ̂_opt, γ̂_opt) = argmin_{λ,γ} BIC(λ, γ).
Output: the optimal tuning parameter pair (λ̂_opt, γ̂_opt).

3.2.3 Implementation of Brain Connectivity Modeling and Statistical Inference

To make inference about brain connectivity networks using the linear regression model shown in Equ.
3.1, we treat each ROI in turn as the response vector and all other ROIs as the predictor matrix. The corresponding coefficient vector gives the strength of connectivity from all other ROIs to the target ROI. The coefficient vector is estimated for each ROI in turn, yielding the whole-brain networks over all ROIs.

To incorporate genetic information into brain connectivity estimation, we propose the framework for genetically-informed group brain connectivity modeling shown in Fig. 3.1. The known genotypic information is incorporated as prior information in the brain connectivity modeling. In the first step, the similarities of SNPs between subjects serve to categorize them within the group and to generate the relationship graph. The similarity score between any two subjects is then squared to form the corresponding weight in W, as described in the Methods section. In the second step, the similarity graph is incorporated into the proposed overlapped group fused model. As discussed above, the coefficients of connectivity for each ROI are estimated sequentially for all subjects to finally obtain the whole group brain connectivity network.

To robustly assess the estimated coefficients of the brain connectivity networks, we utilize the permutation test, as has been previously suggested for fMRI data [55, 107]. In brief, with the parameters of the optimal model fixed, the temporal order of the time courses of each subject is permuted, and the coefficients are re-estimated from the permuted signals. This is repeated L times, and we count the number of times M that the permuted results are larger than the learned coefficients. The significance of each coefficient is then estimated by pval = M/L. In our study, we choose a significance level of pval = 0.05.

3.2.4 Graph Theoretical Analysis

Based on the learned connectivity networks, we apply graph theoretical analysis to extract network features.
In the brain connectivity networks, the nodes represent brain Regions of Interest (ROIs) and the edges represent the relationships/cooperation between ROIs. Although the brain connectivity networks are themselves of great interest, graph theoretical measures help us characterize the networks quantitatively; such measures have been widely used to explore the structural and functional properties of brain connectivity networks [15].

In this study, we utilize several popular graph measures: density, global efficiency and node degree [15]. Global efficiency describes the communication efficiency of the entire graph and is calculated as the average of the inverse shortest path length between all pairs of nodes in the graph. Density is defined as the fraction of present connections out of all possible connections. As a local network measure, we employ node degree, the total number of edges connected to a given node.

3.3 Simulations

We performed simulations to compare the performance of the proposed overlapped group fused model with that of the original LASSO and group LASSO methods. The multi-subject time courses were generated from a Gaussian model. In each simulation, the number of subjects was S = 12 and the number of features was K = 50. The data were generated as follows:
(1) Group structures were assigned for the 12 subjects. All subjects were categorized into several overlapped subgroups with a few shared coefficients across subgroups (Fig. 3.3).
(2) The coefficient vector b was generated for each subject according to the grouping effects. For ease of interpretation, the vector b was binary.
(3) The design matrix X for each subject was randomly generated, containing N observations and K predictors. The error e was Gaussian noise ∼ N(0, 1).
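The evaluation measures used in the comparisons below (Type I error rate, detection power, FDR and F1 score) can be computed from the supports of the true and estimated coefficients; a minimal sketch, with `edge_metrics` an illustrative helper name:

```python
import numpy as np

def edge_metrics(true_b, est_b, tol=1e-8):
    """Type I error rate, detection power, FDR and F1 score comparing the
    estimated coefficient support against the true support."""
    t = np.abs(np.ravel(true_b)) > tol     # true non-zero entries
    e = np.abs(np.ravel(est_b)) > tol      # estimated non-zero entries
    tp = np.sum(t & e)
    fp = np.sum(~t & e)
    fn = np.sum(t & ~e)
    tn = np.sum(~t & ~e)
    type1 = fp / max(fp + tn, 1)           # false positives among true zeros
    power = tp / max(tp + fn, 1)           # true positives among true edges
    fdr = fp / max(tp + fp, 1)             # spurious among learned edges
    f1 = 2 * tp / max(2 * tp + fn + fp, 1)
    return type1, power, fdr, f1
```

The `max(..., 1)` guards simply avoid division by zero when a category is empty.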
To generate the relationship graph G, each element G_ij of G was determined by the structural similarity between subjects i and j, calculated from their structure overlaps. The weight of G_ij was the squared value of the corresponding similarity score.

In the simulations, we tested the performance of the algorithms as a function of the number of time points N. For reliable assessment, each procedure was repeated fifty times and we compared the averaged performance of the different algorithms. Large variability was considered in the group modeling, where the subjects could be categorized into several partially overlapped subgroups, and the Type I error rate, detection power, FDR and F1 score were compared at sample sizes of 70 to 300 time points.

The FDR is defined as the expected ratio of spurious connections to all learned connections. The FDR can be more appropriate in biomedical studies, since it is directly related to the uncertainty of the reported positive results. The F1 score was used to evaluate the overall performance by considering both the Type I and Type II error rates:

F1 = 2 × True positive / (2 × True positive + False negative + False positive).   (3.13)

Figure 3.3: One example of the heterogeneous overlapped group structure used in the simulation.

Simulation results are shown in Fig. 3.4. The proposed method steadily controlled the Type I error rate at < 0.05. The detection power of the proposed method increased with an increasing number of sample points. In our proposed overlapped group fused model, the dimension of the group design matrix is N × KS. Since KS > N, the detection power was < 1 and improved as N increased. In contrast, the LASSO method recovers the coefficients of each subject separately, and in the group LASSO model the dimension of the design matrix is NS × KS, where the number of samples is larger than the number of features; as a result, these methods can obtain a higher detection power. However, the FDR and F1 scores shown in Fig.
3.4 demonstrate that our proposed model generally performed better at recovering the heterogeneous group structures.

Figure 3.4: Simulation results. The simulation is designed to recover a heterogeneous group network, evaluated by the Type I error rate, detection power, FDR and F1 score as the number of data points increases. OGFM denotes the proposed overlapped group fused model. (a) Comparison of the Type I error rate. (b) Comparison of the detection power; the detection power curves of the LASSO and group LASSO models are identical. (c) Comparison of the FDR. (d) Comparison of the F1 score.

These results show that the overlapped group fused model can accurately recover the heterogeneous group structures with relatively high detection power. When applied to real data, it provides a way to investigate the diversity in the population.

3.4 Real Application in Schizophrenia

3.4.1 Subjects

Eleven subjects with Schizophrenia and nine control subjects were recruited for this study. All experiments were approved by the local institutional Ethics Committee, and all subjects provided written informed consent prior to participation. All patients were diagnosed with Schizophrenia by a psychiatrist using standard clinical criteria, and no control subject had any history of neurological illness and/or brain trauma. A full description of subject demographics can be found in [22].

3.4.2 Selection and Genotyping of SNPs

Strong evidence suggests that the glutamatergic system may contribute to the pathophysiology of Schizophrenia [71]. The DAOA gene is related to the glutamatergic system, and our previous study already confirmed its association with Schizophrenia [22]. Here we investigate the diversity in brain connectivity under the control of the DAOA gene. Genomic DNA was extracted from peripheral blood leukocytes using a standard phenol/chloroform method [22].
Eight SNPs present in the DAOA gene were selected: rs3916966, rs2391191, rs3918342, rs3918341, rs778294, rs9558562, rs3916967, and rs1421292.

We first used the polymorphisms of the DAOA gene to categorize the subjects by calculating their genotype similarity graph G. The similarity score for each pair of subjects was determined by their percentage of overlap across all 8 SNPs; the threshold was set to 0.5, below which the similarity score was set to zero. The similarity graph was then used in the overlapped group fused model to guide the brain connectivity modeling in the normal and patient groups. The BIC was applied to choose the optimal model, and the permutation test was finally employed to determine the significant brain connections.

Table 3.3: The indices and names of the 52 selected brain ROIs. 'L' represents the left side of the brain and 'R' the right side.

Index | Name | Index | Name
1 | L-ACC | 27 | R-ACC
2 | L-Amygdala | 28 | R-Amygdala
3 | L-Calcarine | 29 | R-Calcarine
4 | L-Caudate | 30 | R-Caudate
5 | L-Cingulum Mid | 31 | R-Cingulum Mid
6 | L-Cingulum Post | 32 | R-Cingulum Post
7 | L-Cuneus | 33 | R-Cuneus
8 | L-Frontal Mid | 34 | R-Frontal Mid
9 | L-Hippocampus | 35 | R-Hippocampus
10 | L-Insula | 36 | R-Insula
11 | L-Occipital Inf | 37 | R-Occipital Inf
12 | L-Occipital Mid | 38 | R-Occipital Mid
13 | L-Occipital Sup | 39 | R-Occipital Sup
14 | L-Pallidum | 40 | R-Pallidum
15 | L-Parahippocampus | 41 | R-Parahippocampus
16 | L-Parietal Inf | 42 | R-Parietal Inf
17 | L-Parietal Sup | 43 | R-Parietal Sup
18 | L-Putamen | 44 | R-Putamen
19 | L-Rectus | 45 | R-Rectus
20 | L-Temporal Inf | 46 | R-Temporal Inf
21 | L-Temporal Mid | 47 | R-Temporal Mid
22 | L-Temporal Sup | 48 | R-Temporal Sup
23 | L-Thalamus | 49 | R-Thalamus
24 | L-Total Cingulum | 50 | R-Total Cingulum
25 | L-Total Frontal Inf | 51 | R-Total Frontal Inf
26 | L-Cerebellum | 52 | R-Cerebellum

In addition to the study of the combined polymorphisms of the DAOA gene, we also studied the effects of a single SNP on the brain connectivity patterns in the patient group.
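The genotype-similarity weighting described above, a pairwise fraction of matching SNPs, thresholded at 0.5 and squared to give the edge weight, can be sketched as follows; `genotype_similarity_graph` is an illustrative name, and genotypes are assumed to be coded as a simple per-SNP label per subject:

```python
import numpy as np

def genotype_similarity_graph(genotypes, threshold=0.5):
    """Build the subject-relationship weight matrix from SNP genotypes.

    Similarity between two subjects is the fraction of matching SNPs;
    scores below the threshold are zeroed, and the retained edge weight
    is the squared similarity score.
    genotypes: S x P array of genotype codes (one column per SNP)."""
    g = np.asarray(genotypes)
    S = g.shape[0]
    W = np.zeros((S, S))
    for l in range(S):
        for m in range(l + 1, S):
            sim = np.mean(g[l] == g[m])       # fraction of matching SNPs
            if sim >= threshold:
                W[l, m] = W[m, l] = sim ** 2  # squared score as edge weight
    return W
```

The resulting symmetric matrix W is exactly the weight matrix expected by the OGFM fusion penalty.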
Two subgroups were categorized by the different genotypes of rs2391191, and connectivity patterns were compared under the modulation of rs2391191.

3.4.3 fMRI Data

A SIEMENS TRIO 3-T scanner was used to collect data in the resting state. Before scanning, all subjects were instructed to lie on their backs in the scanner with their eyes closed, and were given several minutes to acclimatize themselves to the scanner environment. The resting-state fMRI data were acquired with a 2D echo planar imaging pulse sequence with parameters TR/TE = 2000/30 ms; thickness/gap = 4/0 mm; matrix = 64 × 64; FOV = 192 × 192 mm; flip angle = 90°. In addition, high-resolution T1 scans were also acquired for localization using a 3D-FLASH sequence: TR/TE = 14/4.92 ms; thickness/gap = 1.5/0.3 mm; matrix = 256 × 192; FOV = 230 × 230 mm; flip angle = 25°; 120 slices. All collected image data were reconstructed using the software MRIConvert (http://lcni.uoregon.edu/jolinda/MRIConvert/). The first 10 volumes were discarded, and slice timing was corrected. Any subject with > 1.0 mm of translation and/or 1.08° of rotation was excluded. Images were then spatially normalized to the MNI (Montreal Neurological Institute) standard EPI template. The fMRI data were then bandpass filtered at 0.01-0.08 Hz and spatially smoothed with a 4 × 4 × 4 mm FWHM Gaussian kernel.

Fifty-two brain ROIs were chosen for learning the brain connectivity networks, as shown in Table 3.3. These included representative regions from the visual, motor, sensory, attentional, cerebellar, basal ganglia and default mode networks.

3.4.4 Results

We first estimated the brain connectivity networks in the normal and patient groups based on the combined 8 SNPs. The comparison of the Schizophrenia and normal control groups is shown in Fig. 3.5. Common connections and significantly different connections in the two groups are shown in Fig. 3.5(a) and (b). A χ² test was employed to decide the significance of the connections related to the disease states. Green connections in Fig.
3.5(b) are unique to normal individuals (p < 0.05) and redconnections are unique to the patient group (p < 0.05).Compared to the patient group, the normal group had higher connectivity den-sity (Normal: 0.0667± 0.0058; Patient: 0.0578± 0.0008). The global efficiencydemonstrated that the patient group had lower communication ability in their brainconnectivity networks (Normal: 0.2455± 0.0219; Patient: 0.2344± 0.0056). In-46terestingly, the schizophrenia group had smaller ratio of inter-hemispheric connec-tions as shown in Fig. 3.5(d).(a) (b)(c) (d)Figure 3.5: (a) Common connections for all the subjects in normal and Sch-izophrenia groups. (b) Significantly different connections associatedwith normal and Schizophrenia groups. Green connections are associ-ated with the normal control group only and red connections are asso-ciated with the Schizophrenia group only. (c) Density and global effi-ciency observed across groups. (d) Density assessed within hemisphereand across hemispheres across groups.A number of regions were significantly modulated by the disease states as wellas genotypes including the left putamen as shown in Fig. 3.6, the right posteriorcingulate gyrus and left middle frontral gyrus (p < 0.05).Examples of the subject brain connectivity networks are displayed in Fig. 3.7.47(a) (b)(c) (d)(e)Figure 3.6: (a) The comparison in terms of node degree at the Left Putamenregion in both control and Schizophrenia subjects. (b) - (e) Node de-gree values of the Left Putamen region in Schizophrenia subjects as afunction of rs3916966, rs2391191, rs3918341, rs778294 and SNP statusrespectively.48(a) (b)(c) (d)Figure 3.7: Examples of brain connectivity networks in the group modeling.(a) - (d). Computed brain connectivity for Subjects 01, 03, 05, and 08 inSchizophrenia group. The colors in the figure represent the coefficientweights of the connectivity.Four subjects (Subject 01, 03, 05, 08) in Schizophrenia group were chosen in Fig.3.7. 
According to their genotypes, the subjects in Fig. 3.7 (a) and (b) are more similar to each other (genotype similarity 0.875), while the subjects in Fig. 3.7 (c) and (d) are categorized with an even higher similarity score (genotype similarity 1). The colors in the figures represent the coefficient weights, i.e., the regression coefficients estimated from the proposed model. Although the four networks share some common connections, the learned networks in Fig. 3.7 (a) and (b) are closer in their connectivity structures (structure similarity 0.8218), and the networks in Fig. 3.7 (c) and (d) share more common connections than with the other subjects (structure similarity 0.8608). This example demonstrates that the proposed method can make inferences at the group level while also modeling the diversity in the population modulated by the genetic information.

Figure 3.8: (a)–(b) Connectivity patterns for the subjects with rs2391191 AG and AA genotypes. The colors represent the frequency of the learned connections across subjects. (c) Comparison of density and global efficiency between the two subgroups. (d) Comparison of within-hemisphere and across-hemisphere density between the two subgroups.

In addition to the study of combined polymorphisms of the DAOA gene, we also studied the effects of a single SNP on the brain connectivity patterns. In Fig. 3.8, two subgroups were categorized by the different genotypes of rs2391191 ("AA" and "AG" carriers). As described before, we applied the proposed overlapped group fused model and permutation test to determine the brain connectivity network for each subject in the patient group. We then summarized the structures of the subject connectivity networks within each subgroup to represent that subgroup's connectivity network. In Fig. 3.8 (a) and (b), the colors represent the frequency of the learned connections.
In each subgroup, the subject connectivity networks maintained high structural similarity, while between subgroups they showed different connectivity patterns. As before, global efficiency and density were used to compare the connectivity patterns. Subjects identified as "AG" carriers had higher connectivity density and larger global efficiency.

3.5 Conclusion and Discussion

In this chapter, we have proposed a framework to incorporate genetic factors into group brain connectivity modeling. We used a group fused regression model capable of recovering different connectivity patterns, which effectively handled individual variability as well as group similarity in both simulations and real fMRI data. In our study, the coordinate descent approach was adopted to solve the optimization problem; it is worth noting that other algorithms, such as primal-dual methods, could also be used to achieve better computational efficiency [33]. Using the proposed approach, we demonstrated the changes in connectivity patterns under the effect of a specific genetic variation and how this was modulated by disease.

The proposed approach is a novel way of incorporating genetic information into imaging analysis. Unlike previous separate-model approaches, in which genetic and imaging data were examined separately and later joined in a post-hoc fashion, we incorporated prior genetic knowledge directly into a data-driven model, which makes the model more flexible and assists in the final biological interpretation. Furthermore, we made special efforts to account for inter-subject variability. Nevertheless, a potential weakness of our approach is that we reduced the high-dimensional genetic data to a scalar similarity score. However, incorporating a multidimensional similarity score into the objective function is a direct extension of our work.
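For reference, the density and global efficiency metrics used throughout these group comparisons can be sketched as below. This is a minimal pure-Python version for binary undirected networks; the function names are ours, not from the thesis code.

```python
from collections import deque

def density(adj):
    """Edge density of an undirected binary adjacency matrix (list of lists)."""
    n = len(adj)
    edges = sum(adj[i][j] for i in range(n) for j in range(i + 1, n))
    return 2 * edges / (n * (n - 1))

def global_efficiency(adj):
    """Average inverse shortest-path length over all ordered node pairs
    (unweighted, undirected graph; unreachable pairs contribute zero)."""
    n = len(adj)
    total = 0.0
    for s in range(n):
        # Breadth-first search from s gives shortest path lengths.
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if adj[u][v] and v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(1.0 / d for node, d in dist.items() if node != s)
    return total / (n * (n - 1))
```

Higher global efficiency corresponds to shorter average paths, i.e., greater "communication ability" in the sense used above.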
It is worth mentioning that the incorporation of genetic information into fMRI signal modeling can be regarded as a data fusion approach for multi-modal analysis. Popular data fusion methods include canonical correlation analysis (CCA) based [27, 35] and independent component analysis (ICA) based [28, 91, 103] approaches, with different statistical assumptions. Data fusion analysis gathers complementary information and benefits the biological interpretation in neuroimaging studies. Another limitation of the proposed approach is the use of the permutation test in the brain network estimation, which may break down the temporal information contained in the data. Alternatives are to permute the time courses in the Fourier domain or wavelet domain by randomly resampling the phase of the time series [115].

A myriad of factors can affect brain connectivity, and the single SNP examined here will likely account for only a small fraction of the observed inter-subject variability. Nevertheless, the genetic information used as prior knowledge in this model still revealed significant connectivity alterations that were also a function of disease state. We found changes in connectivity in the left putamen, consistent with prior PET studies demonstrating asymmetric D2 receptor changes in the putamen [44], and with an [18F]-2β-3β-(4-fluoro)tropane ([18F]FCT) study that found asymmetric binding potentials in the putamen of chronic schizophrenic patients [78]. Similarly, we found changes in posterior cingulate connectivity, where decreased grey matter volumes are associated with a poor prognosis in Schizophrenia [102]. A meta-analysis suggested decreased grey matter in a number of brain regions in schizophrenic subjects, including the left middle frontal gyrus [52].
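The Fourier-domain alternative mentioned above — resampling the phases of a time course while preserving its amplitude spectrum — can be sketched as below. This follows the standard surrogate-data recipe, not the thesis code; the handling of the DC and Nyquist bins is one common convention.

```python
import numpy as np

def phase_randomized_surrogate(x, rng=None):
    """Surrogate time series with the same amplitude spectrum as x but
    uniformly random Fourier phases (kept conjugate-symmetric so the
    output is real-valued)."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x)
    spec = np.fft.rfft(x)
    # Randomize phases of all bins except DC (and Nyquist, if present),
    # which must stay real for a real-valued inverse transform.
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0
    if n % 2 == 0:
        phases[-1] = 0.0
    surrogate_spec = np.abs(spec) * np.exp(1j * phases)
    return np.fft.irfft(surrogate_spec, n=n)
```

Because the amplitude spectrum (and hence the autocorrelation) is preserved while the temporal ordering is destroyed, such surrogates provide a null distribution that respects the temporal structure broken by naive permutation.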
It is tempting to speculate, based on our results, whether some or all of the above findings would have been affected by knowledge of the DAOA SNP information of the subjects examined.

This framework is our first attempt to integrate genetic variation into brain connectivity modeling. A larger number of participants will be recruited in future studies. Finally, we note that the proposed framework is sufficiently general that it can also be combined with other prior information (e.g., clinical indices) to guide the brain connectivity modeling. It is a general framework for estimating group brain connectivity that can model the diversity in a population.

3.6 Asymptotic Properties

Here we justify the proposed method by studying the asymptotic properties of our estimate in the classical framework where S, the number of subjects, and K, the number of features, are fixed and the sample size n → ∞. Theorem 3.6.1 provides a weak convergence result for the fusion estimate if the regularization strength parameters λ and γ are chosen properly, implying consistency at the parametric efficient rate.

Theorem 3.6.1. For each subject i = 1, ..., S, assume that there exist positive definite matrices C_i = plim_{n→∞} n⁻¹ X(i)X(i)ᵀ. Suppose further that λ/√n → λ0 and γ/√n → γ0. Then the proposed estimate B̂ = diag(B̂(1), ..., B̂(S)) satisfies

\[
\sqrt{n}(\hat{b} - b) \Rightarrow \operatorname{argmin}(V), \tag{3.14}
\]

where b̂ and b (both (KS)×1) are vectorized versions of B̂ and B obtained by stacking their non-zero columns, respectively, and V : R^{(KS)×1} → R is given by

\[
\begin{aligned}
V(u) = {} & \sum_{i=1}^{S} \left( u_i^{\top} C_i u_i - 2 u_i^{\top} v_i \right) \\
& + \lambda_0 \sum_{i=1}^{S} \sum_{k=1}^{K} \Big[ u_k(i)\,\operatorname{sign}(B_k(i))\, I(B_k(i) \neq 0) + |u_k(i)|\, I(B_k(i) = 0) \Big] \\
& + \gamma_0 \sum_{e_{lm} \in E} W(e_{lm}) \sum_{k=1}^{K} \Big[ (u_k(l) - u_k(m))\,\operatorname{sign}(B_k(l) - B_k(m))\, I(B_k(l) \neq B_k(m)) \\
& \qquad\qquad + |u_k(l) - u_k(m)|\, I(B_k(l) = B_k(m)) \Big],
\end{aligned}
\]

and {v_i}, i = 1, ..., S, are independent random vectors with densities N(0, σ²C_i). In particular, if max(λ, γ) = o(√n), then B̂ is a √n-consistent estimate of B.

Proof. The proof is based on a similar argument to the one used in proving Theorem 4.1 in [26]; here we only outline the differences.
By definition, √n(b̂ − b) minimizes the function

\[
\begin{aligned}
V_n(u) = {} & \sum_{i=1}^{S} \sum_{t=1}^{n} \left[ \left( \frac{u_i^{\top} X_t(i)}{\sqrt{n}} - e_{ti} \right)^2 - e_{ti}^2 \right] \\
& + \lambda \sum_{i=1}^{S} \sum_{k=1}^{K} \left[ \left| \frac{u_k(i)}{\sqrt{n}} + B_k(i) \right| - |B_k(i)| \right] \\
& + \gamma \sum_{e_{lm} \in E} W(e_{lm}) \sum_{k=1}^{K} \left[ \left| B_k(l) - B_k(m) + \frac{u_k(l) - u_k(m)}{\sqrt{n}} \right| - |B_k(l) - B_k(m)| \right] \\
=: {} & \mathrm{I} + \mathrm{II} + \mathrm{III}.
\end{aligned}
\]

Observe that

\[
\mathrm{I} = \sum_{i=1}^{S} \left[ u_i^{\top} \Big( n^{-1} \sum_{t} X_t(i) X_t(i)^{\top} \Big) u_i \right] - 2 \sum_{i=1}^{S} \sum_{t=1}^{n} \frac{u_i^{\top} X_t(i)}{\sqrt{n}}\, e_{ti}.
\]

So by our assumption and the central limit theorem, it follows that I ⇒ Σ_{i=1}^{S} (u_iᵀ C_i u_i − 2 u_iᵀ v_i), where v_i ~ N(0, σ²C_i) are independent. Then, arguing the limits of terms II and III as in the proof of Theorem 4.1 in [26] and applying Slutsky's lemma, (3.14) follows from standard epi-convergence results.

Chapter 4

A Sticky Weighted Time Varying Model for Resting State Brain Connectivity Estimation

4.1 Introduction

Inferring brain connectivity networks from fMRI plays an important role in understanding brain functioning both in normal and in disease states. As mentioned in Section 1, many mathematical methods have been developed for brain connectivity modeling. However, most current approaches assume that the connectivity structure is time-invariant, i.e., they do not consider temporal variations of the underlying neural activity, and thus the inferred brain connectivity is possibly a temporally averaged connectivity pattern [92]. Assessing the temporal dynamics of connectivity patterns may therefore provide an additional dimension along which to assess brain activity [68].

Several strategies have thus far been proposed to investigate brain connectivity dynamics. Lagged-interaction based approaches, such as dynamic Bayesian network modeling [83] and auto-regressive (AR) models [54], examine brain interactions simultaneously and over adjacent time steps. State space model based approaches, by combining lagged interaction and filtering theory, estimate non-stationary brain connectivity at each time point [74]. In addition, time-frequency based approaches, such as wavelet transform based coherence analysis, infer resting state dynamic brain connectivity from the time-frequency plane
[21]. Wavelet based time varying Granger causality analysis has also been used to produce evolving brain connectivity maps modulated by task performance [124].

If brain connectivity networks can be assumed to change slowly and smoothly over time, a sliding window approach is appropriate. By specifying a fixed window length and shifting the window by a given number of data samples, different network learning methods such as correlation [61, 67, 140], covariance [1] and ICA [72, 123] have been applied to estimate the time dependent interactions within each window. However, determining the appropriate window length is critical and difficult: with too small a window, the estimated connectivity patterns suffer from large fluctuations due to noise and thus may not truly reflect the underlying dynamic properties of brain activity; in contrast, too large a window results in insensitivity to possibly important brain state changes. Avoiding the assumption that changes occur slowly over time, several studies have reported that functional networks inferred by stationary approaches may be unduly influenced by changes at a few critical time points [92]. These critical time points can be used to segment the entire signal into quasi-stationary sections for the purposes of brain connectivity estimation [37, 86, 92]. Nevertheless, change point detection may be particularly susceptible to artifacts (e.g., due to head motion) in the data. Another important characteristic of both sliding window and change point detection multivariate models is the assumption that different brain regions have temporal variations at the same time scale, so that the entire brain dynamics switch simultaneously; in practice, different pairs of brain regions may interact at different temporal scales [68].

Beyond the specific area of fMRI brain connectivity modeling, several time-varying frameworks have been proposed to discover multivariate interactions over time.
These include time-varying regularized graphical structure estimation [160], linear regression based Bayesian network (BN) approaches [120, 133, 153] and change point detection approaches [76]. In a BN framework, both network structure and parameter changes are treated as random processes whose values at each time epoch are modeled via a BN approach. In one change point detection model, a fused penalty in a preliminary linear regression model is used to detect change points, and multivariate regression is then applied separately to each segment [76]. Another change point detection approach uses penalized regression and Gaussian mixture models [114].

Based on the above discussion, it is clear that modeling brain dynamics often requires fairly strict assumptions, such as assuming the networks change smoothly, change suddenly, or change in a piecewise stationary fashion [68]. Moreover, learning dynamic changes in brain interactions may be complicated by factors such as head movement, measurement noise and other random fluctuations. To reduce the influence of random noise, it is reasonable to assume that brain connectivity patterns mostly change smoothly except at critical change points. Under this assumption, temporally adjacent networks are more likely to share common patterns than temporally distant networks, as assumed in a weighted time-varying regression model [133], yet abrupt changes can still occur at specific change points.

Therefore, in this chapter, we propose a sticky weighted time varying (SWTV) model that estimates the non-stationary process of brain interactions in a temporally penalized, weighted regression fashion. We incorporate a fused penalty [142] into the weighted regression model [133]. With the fused penalty added to the weighted regression model, we can estimate both smoothly changing coefficients and abruptly changing structures, so that the change point detection problem is not separated from the network estimation problem.
More importantly, the proposed method allows different pairs of brain regions to exhibit fluctuations at different time scales, as illustrated by the simulations in Section 4.3. Finally, we assume that connections between spatially disparate brain regions are relatively sparse, to facilitate meaningful biological interpretation.

In the remainder of this chapter, we introduce the SWTV model in Section 4.2 and perform simulations to validate the proposed method in Section 4.3. In Section 4.4, the proposed SWTV model is applied to resting state fMRI data sets from both Parkinson's Disease (PD) and control subjects, and significantly different temporal and spatial patterns are found to be associated with the disease state.

4.2 Methods

In this section, we briefly introduce the regression models used for brain connectivity network modeling at the single subject level, and then present the proposed sticky weighted time varying model for estimating dynamic interactions between different brain regions in the resting state.

4.2.1 Sticky Weighted Time Varying Model

Multivariate linear regression models have been widely used to infer neural interactions [89, 121]. As introduced in Section 3.2, the fMRI time course of an ROI is regarded as the response variable, predicted from the time courses of all other ROIs at zero lag as

\[
Y = X\beta + e, \tag{4.1}
\]

where the vector Y of length T is the time course of one brain ROI, X is the T × K predictor matrix formed from the time courses of all other ROIs (with K + 1 being the total number of ROIs), β is the coefficient vector and e is the Gaussian noise term. Due to the non-stationary nature of brain activity, the time dependent regression model becomes

\[
Y_t = X_t \beta^t + e, \tag{4.2}
\]

where t is the time index and the regression coefficient vector is estimated at each time point separately. Y_t is the response sample at time point t and X_t is the t-th sample row of the predictor matrix.
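For concreteness, data following the time dependent model of Eq. (4.2) can be simulated as below. The dimensions and the piecewise-constant coefficient path are hypothetical choices for illustration, not the settings used in this thesis.

```python
import numpy as np

rng = np.random.default_rng(42)
T, K = 90, 5                      # time points and predictors (hypothetical)

# Piecewise-constant time-varying coefficients: beta[t] changes abruptly
# at t = 30 and t = 60 and is constant within each segment.
beta = np.zeros((T, K))
beta[:30, 0] = 1.0                # segment 1: only predictor 0 active
beta[30:60, 1] = -0.8             # segment 2: only predictor 1 active
beta[60:, 2] = 0.5                # segment 3: only predictor 2 active

X = rng.standard_normal((T, K))   # design matrix
e = 0.1 * rng.standard_normal(T)  # Gaussian noise
Y = np.einsum("tk,tk->t", X, beta) + e   # Y_t = X_t beta^t + e_t
```

With a single sample available per time point, estimating each β^t in isolation is ill-posed, which motivates the kernel weighting and fused penalty developed next.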
In order to make the connections sparse, one could use an l1 penalty on the regression coefficients. However, with only one sample point, the coefficient estimates would be extremely unstable. Thus, in order to estimate time-varying structures and coefficients while still allowing sparsity, we assume that the underlying networks change smoothly over time. Following prior work [133], we can estimate the coefficients for each time point separately in a weighted time varying (WTV) model as

\[
\hat{\beta}^{t^*} = \operatorname*{argmin}_{\beta^{t^*}} \sum_{t=1}^{T} W^{t^*}(t) \left( Y_t - X_t \beta^{t^*} \right)^2 + \lambda \left\| \beta^{t^*} \right\|_{l_1}, \tag{4.3}
\]

where β^{t*} is the coefficient vector to be estimated at time t*, β̂^{t*} is its estimator, and λ is the parameter of the l1 penalty. W^{t*}(t) is the weight of the observation from time t when estimating the coefficients at time t*. In general, W^{t*}(t) can be any normalized kernel function; in this chapter it is defined as

\[
W^{t^*}(t) = \frac{\exp\!\left(-(t - t^*)^2 / h\right)}{\sum_{t=1}^{T} \exp\!\left(-(t - t^*)^2 / h\right)}. \tag{4.4}
\]

This is a normalized Gaussian radial basis function (RBF) kernel, with h representing the kernel band-width. Note that this model is essentially a sparse weighted regression model that estimates the coefficients at each time point separately by reweighting the observations. Under the smoothness assumption, temporally adjacent coefficients are more likely to be similar than temporally distant coefficients.

A "sticky" weighted time varying model is therefore introduced as

\[
\operatorname*{minimize}_{\beta^{t^*},\; t^* = 1, \ldots, T} \; \sum_{t^*=1}^{T} \sum_{t=1}^{T} W^{t^*}(t) \left( Y_t - X_t \beta^{t^*} \right)^2 + \lambda \sum_{t^*=1}^{T} \left\| \beta^{t^*} \right\|_{l_1} + \gamma \sum_{t^*=2}^{T} \left\| \beta^{t^*} - \beta^{t^*-1} \right\|_{l_1}, \tag{4.5}
\]

where γ controls the fused penalty and serves to keep the coefficients temporally consistent except at (possibly several) abrupt change points.

To solve this optimization problem efficiently, we rewrite the response vector and predictor matrix as Y_t^{t*} = √(W^{t*}(t)) Y_t and X_t^{t*} = √(W^{t*}(t)) X_t. Let Y^{t*} = (Y_1^{t*}, Y_2^{t*}, ..., Y_T^{t*})′ and X^{t*} = (X_1^{t*}′, X_2^{t*}′, ..., X_T^{t*}′)′; the weights can then be incorporated directly into the square loss function. We can further simplify the objective function by expressing it in matrix form. Suppose Ỹ = (Y^{1}′, Y^{2}′, ..., Y^{T}′)′ is a response vector of length T·T, X̃ = diag(X^1, X^2, ..., X^T) is a block diagonal matrix of dimension T T × T K, and β̃ = (β^{1}′, β^{2}′, ..., β^{T}′)′ is the concatenated time varying coefficient vector of length T K, where each β^t is the coefficient vector at time point t. The objective function can then be formulated as

\[
\operatorname*{minimize}_{\tilde{\beta}} \; \left\| \tilde{Y} - \tilde{X} \tilde{\beta} \right\|_F^2 + \lambda \left\| \tilde{\beta} \right\|_{l_1} + \gamma \left\| C \tilde{\beta} \right\|_{l_1}, \tag{4.6}
\]

where C is a sparse (T − 1)K × T K matrix with two nonzero elements in each row. More specifically, C((T−1)(i−1)+j, K(j−1)+i) = −1 and C((T−1)(i−1)+j, Kj+i) = 1, for i = 1, 2, ..., K and j = 1, 2, ..., T − 1. In this formulation, the problem becomes a generalized fused LASSO problem which can be solved using the smoothing proximal gradient (SPG) method [25]. SPG is an efficient algorithm with a convergence rate of O(1/ε), where ε is the precision of the algorithm; the per-iteration complexity of SPG is linear in the number of nonzero elements of the constructed sparse matrix C [25]. In this study, we use the SPG optimization toolbox [25]; the implementation of the SWTV model is described in Table 4.1. It is worth noting that other algorithms, including primal-dual methods, can also be used to solve this optimization problem [33].

4.2.2 Model Selection

The parameters of a stationary regression model are usually determined by cross validation (CV), which separates the data into training and testing sets.
However, the standard CV approach cannot be employed directly in our time-varying case, since each sample corresponds to a specific time point and the structures and coefficients may differ across time.

Therefore, in order to apply cross validation, we first up-sample the data by a factor of two: the odd samples are the original data points and the even samples are interpolated data points. For the purpose of model selection, we assume that the corresponding even samples have the same temporal properties as the odd samples. In the following simulation studies, by treating the odd samples as the training set and the even samples as the testing set, we can select the optimal parameters of the model.

For each fixed set of parameters λ, γ and band width h, we can estimate the time-varying coefficients as described in Table 4.1 and use cross validation to select the optimal values. However, for large scale data sets, cross validation can be time consuming and may not be feasible. Instead, a gradient descent approach can be applied to iteratively update each parameter, as described in [26, 75]: it sequentially applies three line searches along each descent direction to minimize the corresponding CV mean square error until the error converges. For the static sparse regression model used in the simulations, we utilize the stability selection approach, which has been shown to enhance selection accuracy [101].

Table 4.1: Implementation of the sticky weighted time varying model.

Input: Y ∈ R^T, X ∈ R^{T×K}, regularization parameters λ, γ and band width h.
Step 1: Weight the response vector Y and the predictor matrix X.
  1. Based on h, calculate W^{t*}(t) according to Eq. (4.4), for t = 1, 2, ..., T and t* = 1, 2, ..., T.
  2. Y_t^{t*} ← √(W^{t*}(t)) Y_t, and set Y^{t*} = (Y_1^{t*}, Y_2^{t*}, ..., Y_T^{t*})′.
  3. X_t^{t*} ← √(W^{t*}(t)) X_t, and set X^{t*} = (X_1^{t*}′, X_2^{t*}′, ..., X_T^{t*}′)′.
Step 2: Construct the objective function.
  1. Ỹ ← (Y^{1}′, Y^{2}′, ..., Y^{T}′)′.
  2. X̃ ← diag(X^1, X^2, ..., X^T).
  3. Construct the sparse matrix C ∈ R^{(T−1)K×TK} as C((T−1)(i−1)+j, K(j−1)+i) ← −1 and C((T−1)(i−1)+j, Kj+i) ← 1, for i = 1, 2, ..., K and j = 1, 2, ..., T − 1.
  4. Form the objective function of Eq. (4.6) from Ỹ, X̃, β̃, C, λ and γ.
Step 3: Estimate the time-varying coefficients.
  Apply the SPG toolbox [25] to solve the optimization problem in Eq. (4.6) and obtain the estimated time varying coefficients β̃.
Output: time varying coefficients β̃ = (β^{1}′, β^{2}′, ..., β^{T}′)′.

4.2.3 Statistical Analysis

To perform inference of brain connectivity networks, we utilize the linear regression approach described in Section 3.2.3. Generally speaking, we treat each ROI in turn as the response vector and all other ROIs as the predictor matrix. In this way, the time varying coefficient vector for each ROI is estimated one by one until the whole-brain network is obtained.

To quantify and compare the temporal variability of the inferred networks, we define the network variation as

\[
V = \frac{1}{T-1} \sum_{t=2}^{T} \left\| G(t) - G(t-1) \right\|_F^2, \tag{4.7}
\]

where t is the time index and G(t) is the brain connectivity matrix estimated at time point t using the SWTV model as described above. The network variation is the average distance between brain connectivity networks at adjacent time points. It quantifies changes in both network structure and connectivity strength, and measures the degree of switching or oscillation of the networks across time. While a relatively simple measure, we found V an intuitive way to quantify and compare the temporal changes of brain connectivity networks: with a fixed γ, a higher V implies higher moment-to-moment variability in the networks (Fig.
4.5).

4.3 Simulations

To validate the proposed method, we performed simulations comparing the performance of the SWTV model with that of the weighted time varying model and a static sparse regression model. We considered different simulation settings in which the variables changed on the same time scale, with and without autocorrelation, and on different time scales.

In brief, the simulated data were generated from a Gaussian model with changing structures and coefficients as Y_t = X_t β^t + e_t. X_t was a randomly generated sample row at time point t with K variables (i.e., a 1×K row vector), β^t was a time dependent coefficient vector of the same length K (K = 20) and e_t was white Gaussian noise.

More specifically:

(1) We first generated the changing coefficients β^t. In the first and second simulations, we assumed that all variables changed on the same time scale N, and the total sample size was T = 3×N (an example is shown in Fig. 4.1 (a)). In the third simulation, different coefficients could have different time scales, as shown in Fig. 4.3 (a); the averaged time scale and sample size were set to N and T = 3×N respectively.

(2) The design matrix X was randomly generated with T observations and K predictors. The error vector e was Gaussian noise ~ N(0, 1). The response vector Y was generated by Y_t = X_t β^t + e_t, t = 1, ..., T. In the second simulation, to impose autocorrelation structure on the data, a Gaussian smoothing filter with variance 1 was applied to X and Y separately.

Figure 4.1: Results for the first simulation. (a) The true model. (b)–(d) Models learned by static LASSO, the Weighted Time Varying model, and the Sticky Weighted Time Varying model respectively. The time index is along the x-axis, the variable index along the y-axis, and the color bar represents the coefficients' strength.

Figure 4.2: Results for the second simulation. (a) The true model. (b)–(d) Models learned by static LASSO, the Weighted Time Varying model, and the Sticky Weighted Time Varying model respectively. The time index is along the x-axis, the variable index along the y-axis, and the color bar represents the coefficients' strength.

Figure 4.3: Results for the third simulation. (a) The true model. (b)–(d) Models learned by static LASSO, the Weighted Time Varying model, and the Sticky Weighted Time Varying model respectively. The time index is along the x-axis, the variable index along the y-axis, and the color bar represents the coefficients' strength.

We compared the proposed SWTV model with the weighted time varying model [133] and the static LASSO model. Cross validation was used for parameter selection in the SWTV and WTV models as discussed before, and stability selection was used for the static LASSO model.

In the simulations, we tested the performance of the algorithms as a function of the time scale N. For reliable assessment, each procedure was repeated fifty times and the averaged performance of the different algorithms was compared. Examples of results for the three simulations are shown in Fig. 4.1, Fig. 4.2 and Fig. 4.3, comparing the true model with the models learned by the static LASSO, WTV and SWTV methods. The results demonstrate that the variables had both slowly changing coefficients and abruptly changing structures over time. Note that while the smallest time scale was N, some coefficients could remain constant over a longer epoch, such as variable 20 in Fig. 4.1 (a). As expected, the coefficients recovered by the static model were determined in part by the critical samples, and larger fluctuations were observed in the coefficients estimated by the WTV model compared with those of the SWTV model in all simulations.

The F1 score was employed to quantitatively evaluate the overall performance by considering both the Type I and Type II error rates.

Figure 4.4: Simulation results. (a) F1 scores of the first simulation. (b) F1 scores of the second simulation. (c) F1 scores of the third simulation. Red lines represent the F1 scores of the proposed method, blue lines the weighted time varying model, and green lines the static model.

As shown in Fig. 4.4, we compared F1 scores at the (averaged) time scale N = 15, 20, 30, 50 and 70. The results demonstrate that with increasing time scale, the proposed SWTV and WTV models recover time varying structures more accurately; when the time scale was smaller than 15 time points, estimation of the true changing structures may be unreliable. Compared with the other two methods, the proposed SWTV model yielded higher accuracy in recovering time-dependent structures in all simulations. After adding autocorrelation structure to the data in the second simulation, the accuracy of recovering the underlying time varying structures decreased compared with data without autocorrelation. In the third simulation, although the averaged time scale was fixed, the structures could change within a shorter time period; as a result, the F1 scores were lower compared with those of the first simulation.
By having the capability of estimating both smooth and abrupt changes, the proposed SWTV model more accurately estimated the underlying time-varying brain connectivity patterns.

4.4 Real Application

In this section, we apply the proposed method to a real resting state fMRI data set and study the dynamic properties of brain connectivity networks in subjects with Parkinson's Disease (PD). We first estimate the time varying brain connectivity networks for each subject and then compare the temporal and spatial patterns of the inferred connectivity networks.

4.4.1 Subjects and fMRI Resting State Data Set

Twelve PD subjects and ten healthy control subjects were recruited from the Pacific Parkinson's Research Center (PPRC) at the University of British Columbia (UBC). All experiments were approved by the Ethics Board at UBC, and all subjects provided informed consent prior to participation.

A 3 Tesla scanner (Philips Gyroscan Intera 3.0 T; Philips Medical Systems, Netherlands) equipped with a head coil was used to collect data in the resting state. Before scanning, all subjects were instructed to lie on their backs in the scanner, with several minutes to acclimatize themselves to the scanner environment with eyes closed. Blood oxygenation level-dependent (BOLD) contrast echo-planar (EPI) T2*-weighted images were acquired with the following parameters: repetition time 1985 ms, echo time 37 ms, flip angle 90°, field of view (FOV) 240.00 mm, matrix size 128×128, pixel size 1.9 mm × 1.9 mm. SENSE acceleration was used in the EPI acquisition. The duration of each functional run was 4 min, during which we obtained 36 axial slices of 3 mm thickness with a 1 mm gap. The FOV was set to include the cerebellum ventrally and the dorsal surface of the brain. In total, 48 Freesurfer-derived ROIs were chosen in this study, as shown in Table 4.2.

Table 4.2: The index and name of the 48 selected brain ROIs. 'L' represents the left side of the brain and 'R' the right side.

Index  Name                       Index  Name
L1     L-Cerebellum               R1     R-Cerebellum
L2     L-PMd                      R2     R-PMd
L3     L-PMv                      R3     R-PMv
L4     L-Pre-SMA                  R4     R-Pre-SMA
L5     L-SMA-proper               R5     R-SMA-proper
L6     L-ACC                      R6     R-ACC
L7     L-Caudate                  R7     R-Caudate
L8     L-Cerebellum-Cortex        R8     R-Cerebellum-Cortex
L9     L-PFC                      R9     R-PFC
L10    L-Pallidum                 R10    R-Pallidum
L11    L-Putamen                  R11    R-Putamen
L12    L-Somatosensory            R12    R-Somatosensory
L13    L-Thalamus-Proper          R13    R-Thalamus-Proper
L14    ctx-L-caudalmiddlefrontal  R14    ctx-R-caudalmiddlefrontal
L15    ctx-L-cuneus               R15    ctx-R-cuneus
L16    ctx-L-inferiorparietal     R16    ctx-R-inferiorparietal
L17    ctx-L-inferiortemporal     R17    ctx-R-inferiortemporal
L18    ctx-L-lateraloccipital     R18    ctx-R-lateraloccipital
L19    ctx-L-middletemporal       R19    ctx-R-middletemporal
L20    ctx-L-precentral           R20    ctx-R-precentral
L21    ctx-L-precuneus            R21    ctx-R-precuneus
L22    ctx-L-superiorparietal     R22    ctx-R-superiorparietal
L23    ctx-L-superiortemporal     R23    ctx-R-superiortemporal
L24    ctx-L-supramarginal        R24    ctx-R-supramarginal

4.4.2 Results

To apply the proposed method to the PD and control subjects, we need to choose the band width h, the sparse penalty parameter λ and the fused penalty parameter γ in the SWTV model.
We conducted parameter selection using a gradient descent approach for each subject. The optimal parameters for each subject varied across a broad range of values (h = 38.9091s ± 26.1769s, λ = 0.5117 ± 0.4382, γ = 1.5341 ± 0.8308). Using the optimal values, the density of the learned connectivity networks varied from 0.0294 to 0.3326, making it difficult to compare the two groups. To fairly compare the connectivity patterns of the patient and control groups, we chose fixed parameters/densities for all the subjects in this study.

Although a few studies have been conducted on time variation in connectivity networks, the exact time scale of brain activities is unclear and varies between subjects. This is especially true in resting state studies, where subjects are asked to lie quietly and not think of anything in particular, so the exact temporal patterns of brain activity may vary across the population. Similar to the choice of sliding window length, a small bandwidth h will suffer from large fluctuations, while a large bandwidth h may reduce sensitivity to fluctuations in the signal. Following prior work [61] as well as our preliminary studies, we set the bandwidth to 32s (16 points). A comprehensive comparison of brain connectivity variation scales will be conducted in future work.

Fig. 4.5 (a) demonstrates the relationship between the averaged number of connections and the sparsity parameter λ as applied to one control subject. Suppose λ0 = 0, λ1 = 0.1, λ2 = 0.2, ..., λ20 = 2; it is apparent that the averaged number of connections decreases as the sparsity parameter λ increases. We compare the averaged common connections between the inferred networks with λi and λi−1 (i = 2, ..., 20), as shown in Fig. 4.5 (a). Reassuringly, we observed that the connectivity inferred with a larger λ is mostly contained in the networks estimated with a smaller λ.
In other words, important connections will always be selected. In our study, we learned the networks with a fixed sparsity parameter λ of 0.5 for a fair comparison at the population level. We also compared the temporal patterns with fixed sparsity (0.1) across all the subjects.

Figure 4.5: (a) The averaged number of detected connections within the networks as the sparsity parameter λ increases, with a fixed fused penalty parameter γ = 0.5. The blue line represents the averaged number of detected connections as a function of λ. The red dashed line represents the averaged number of common connections with the network detected by a smaller λ. E.g., the first point of the red line is the averaged number of common connections between the networks with λ = 0.1 and λ = 0. (b) The network variations as the fused penalty parameter γ increases, with a fixed λ = 0.5.

Fig. 4.5 (b) demonstrates the relationship between the value of network variation and the fused penalty parameter γ when applied to one control subject. The network variation generally decreases with increasing values of γ. However, since we are interested in the relative differences between the control and patient groups, we set γ = 1.5 for all subjects. We also compared the connectivity networks between the two groups when γ = 0.5.

Fig. 4.6 and Fig. 4.7 demonstrate examples of time varying brain connectivity networks of typical normal and PD subjects at different time points, where the networks are learned with fixed parameters h = 32s, λ = 0.5, γ = 1.5. The proposed method could estimate the brain connectivity networks with both changing structures and coefficients. When compared with the normal subject, we note that the PD subject shows a sparser network.
In addition, the PD subject has more distributed connections, while the normal control subject tends to incorporate more hub regions in its brain connectivity networks.

To measure temporal properties of the learned time varying brain connectivity networks, we compared the network variations between the control and PD groups in Fig. 4.8. The averaged network variations were significantly lower in the PD group, whether with a fixed sparsity penalty parameter λ = 0.5 (Fig. 4.8(a)) or fixed sparsity (0.1) (Fig. 4.8(b)). Fig. 4.9 compares the averaged time period, defined as the duration of non-zero values, between the control and PD groups. We note that the PD group has a larger time period across different parameter settings compared with that of the control group. If we consider the "switching ratio", defined as the ratio of the number of time points with a switch from zero to non-zero states to the total number of time points, we note that the PD group had a significantly smaller switching ratio, as shown in Fig. 4.10.

We have also investigated the spectral properties of the inferred brain connectivity networks. We note that the most dominant low frequency connectivity fluctuations are below 0.02 Hz, and specifically at around 0.005 and 0.015 Hz, which is consistent with previous studies [1]. While we found no significant differences between groups in the mean frequency, we suspect this could be due to the relatively small sample size in our study.

In addition to the temporal dynamics, we also studied the spatial patterns learned with the fixed sparsity (0.1) by examining "consistent" connections over time. We define consistent connections as those connections that appear at least once at one time point in all subjects within a given group. As shown in Fig. 4.11, the PD group has fewer cortico-basal ganglia connections and more cortico-cortical connections compared to the control group.
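As a concrete illustration, the three temporal summaries used above (network variation, averaged time period and switching ratio) can all be computed from a sequence of estimated coefficient matrices. The sketch below is ours, not the thesis implementation: it assumes the networks are stored as a T×K×K NumPy array, approximates the averaged time period as the total active time per edge averaged over ever-active edges, and normalizes the switching ratio over both time points and edges.

```python
import numpy as np

def temporal_metrics(nets):
    """Summarize a time-varying connectivity estimate.

    nets : array of shape (T, K, K), the estimated coefficient matrix
           at each of T time points (hypothetical storage layout).
    """
    T = nets.shape[0]
    # Network variation: mean squared Frobenius distance between
    # networks at adjacent time points.
    diffs = nets[1:] - nets[:-1]
    variation = np.sum(diffs ** 2) / (T - 1)

    active = nets != 0                       # (T, K, K) on/off states
    # Averaged time period: total active time per edge, averaged over
    # edges that are ever active (one plausible reading of "duration
    # of non-zero values").
    ever_on = active.any(axis=0)
    time_period = active.sum(axis=0)[ever_on].mean()

    # Switching ratio: fraction of time steps at which an edge
    # switches from zero to non-zero (normalization is our choice).
    switches = (~active[:-1] & active[1:]).sum()
    n_edges = ever_on.sum()
    switching_ratio = switches / ((T - 1) * max(n_edges, 1))
    return variation, time_period, switching_ratio
```

A lower variation and switching ratio together with a longer time period then correspond to the more "static" behavior reported for the PD group.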
The alterations in cortico-cortical and cortico-basal connectivity may reflect compensatory connections to ameliorate the effects of the diseased basal ganglia [109–111].

4.5 Conclusion and Discussion

It is clear that the brain is inherently non-stationary. Therefore, studying dynamic properties of brain connectivity networks could extend our understanding of brain functioning. In this chapter, a penalized weighted regression model is presented to estimate both smooth and abrupt changes in time dependent brain connectivity patterns. Compared with previous multivariate time varying approaches introduced for fMRI brain connectivity modeling, the proposed SWTV model is more flexible and allows different pairs of brain regions to have different dynamic time scales. While the proposed method is designed for time evolving network estimation, when the underlying models are static the proposed method can still accurately estimate the networks with appropriate parameters.

When applied to real resting state fMRI data consisting of 12 subjects with PD and 10 control subjects, PD subjects had significantly reduced network variation, likely related to impaired cognitive flexibility in PD. This highlights the importance of establishing dynamic properties in PD subjects.

While the proposed method appears promising, there are a number of limitations. The synthetic time courses generated in the simulations could be made more realistic by embedding a long memory component using the fractional Gaussian noise model [30, 31, 63]. We had to estimate certain parameters, such as sparsity and temporal bandwidth. Further work will be required to more comprehensively investigate time varying brain connectivity patterns over a broad range of time scales. Resting state data is particularly challenging in this regard, as different subjects will undoubtedly have different temporal patterns.
We used a very simple metric to estimate the temporal variability of the network; this could be expanded in future work. A previous study has suggested that larger brain regions tend to show greater connectivity variability, while smaller regions are more stable [68]. Nevertheless, disease related changes of the time-varying patterns in brain connectivity such as we observed might be a potential biomarker for future studies [72, 85].

Figure 4.6: Time varying brain connectivity networks learned with fixed parameters (h = 32s, λ = 0.5, γ = 1.5) for one typical control subject at (a) t = 70s (35 points), (b) t = 80s (40 points) and (c) t = 90s (45 points). The blue lines and red lines represent the positive and negative coefficients respectively. The thicknesses of the lines represent the absolute values of the coefficients.

Figure 4.7: Time varying brain connectivity networks learned with fixed parameters (h = 32s, λ = 0.5, γ = 1.5) for one typical PD subject at (a) t = 70s (35 points), (b) t = 80s (40 points) and (c) t = 90s (45 points). The blue lines and red lines represent the positive and negative coefficients respectively.
The thicknesses of the lines represent the absolute values of the coefficients.

Figure 4.8: The comparison of network variation between the normal and PD groups with either (a) a fixed sparsity penalty parameter (λ = 0.5) or (b) fixed sparsity (0.1).

Figure 4.9: The comparison of averaged time period between the normal and PD groups with either (a) a fixed sparsity penalty parameter (λ = 0.5) or (b) fixed sparsity (0.1).

Figure 4.10: The comparison of switching ratio between the normal and PD groups with either (a) a fixed sparsity penalty parameter (λ = 0.5) or (b) fixed sparsity (0.1).

Figure 4.11: Connections that consistently appear in at least one time point in all subjects in (a) the control group and (b) the PD group.

Chapter 5

A Combined Static and Dynamic Model for Resting State Brain Connectivity Estimation

5.1 Introduction

As the brain is inherently non-stationary, assessing the temporal dynamics of brain connectivity patterns represents an additional dimension through which to gain deeper insights into brain activity [17, 68].

The dynamics of brain connectivity networks are particularly important as they are associated with a variety of neurological disorders such as schizophrenia [124], multiple sclerosis [80], Parkinson's Disease [90] and post-traumatic stress disorder [85]. For instance, altered contributions of dynamic brain connectivity patterns have been reported in subjects with multiple sclerosis [80]. The network variation of subjects with Parkinson's Disease was decreased compared with that of control subjects, an observation that may be related to the cognitive rigidity – i.e. difficulty switching between tasks – that is frequently observed in PD [90].

A few studies have investigated brain connectivity dynamics [7, 42, 60, 77, 124] and have demonstrated that connectivity can be mediated by learning and/or task performance [7, 42, 124].
In addition to task-related connectivity changes, various groups have assessed dynamic changes during resting state fMRI. Frequently, the assumption is made that the brain networks change slowly and smoothly with time, so a sliding window based approach is used [1, 49, 61, 67, 72, 123, 140]. An alternative to assuming smoothly changing connectivity is to assume that the brain states are relatively stable between a few critical time points [92]. These critical time points can thus be used to segment the entire brain signal into quasi-stationary sections for the purposes of brain connectivity estimation [37, 86, 92].

It is becoming increasingly apparent that neither a continually changing nor a quasi-stationary model is adequate for brain connectivity modelling. For example, when studying functional coupling between brain regions across different task states, it has been found that central tendencies dominate the coupling configurations, though different task states show significant differences in brain connectivity patterns [77]. Similar conclusions have been drawn when comparing the network patterns in resting and different task states: a similar intrinsic architecture is present across the tasks and the resting state [32]. Thus the brain very likely maintains relatively stable connections with superimposed flexible alterations. The central dominant architecture may be due to the underlying brain structures, and should play key roles in executing intrinsic functions of the brain [77].

Therefore, in this chapter, we propose a combined static and dynamic model (CSDM) for brain connectivity estimation. Instead of only investigating the static or dynamic features separately, we aim to jointly infer time-invariant and time-dependent connectivity patterns. The proposed combined approach is inspired by the so-called dirty model (DM), which was originally developed for multitask learning [70].
Multitask learning is statistically effective when jointly recovering related features across different tasks. For instance, the group Lasso approach has been used to estimate brain connectivity networks at the group level [154]. Different from group Lasso, which only encourages shared features, the dirty model is able to recover individual task dependent features in addition to the shared common features. It is more realistic and outperforms both the Lasso and group Lasso models [70].

Though powerful, the DM has never been studied for time-varying brain connectivity network modeling. Therefore, in this chapter we introduce the dirty model to the brain network modelling area and extend it to the time-varying setting based on the sliding window framework. In our formulation, each time window is treated as one task and we simultaneously make inference over all time windows (multitask learning). This can model the temporal brain connectivity patterns by jointly estimating the time invariant and time dependent connections. In our proposed approach, the multitask learning model mainly serves as the feature selection tool in the time-varying setting. With the selected static and dynamic variables, a least squares model is further adopted to better estimate the coefficients.

In the remainder of this chapter, we will present the combined static and dynamic model and discuss the interpretations of the learned temporal brain connectivity networks in Section 5.2. The proposed approach is compared with the Lasso and group Lasso models in the time-varying setting in Section 5.3. In Section 5.4, the proposed CSDM approach is applied to a resting state fMRI study in PD. The different roles of the static and dynamic connectivity features will be discussed.

5.2 Methods

The multivariate linear regression model is one of the most commonly used methods in brain connectivity network modeling, as discussed in Section 3.2.
With the extension to the multitask setting, multiple data sets can be simultaneously inferred. This allows multiple tasks to share some common features such as sparsity, yet still fit the individual multivariate data well. In this section, multivariate regression models will first be briefly described, and we will then focus on the proposed combined static and dynamic model for resting state temporal brain connectivity modeling.

5.2.1 Combined Static and Dynamic Brain Connectivity Network Estimation

In the regression model used for brain connectivity network estimation, the fMRI time course of one brain Region of Interest (ROI) is regarded as the response vector, and the time courses of all other ROIs are treated as predictor variables,

Y = Xβ + e,  (5.1)

where the response variable Y denotes the time course of one brain ROI with sample length T, X is the predictor matrix with dimension T×K based on the time courses of all other ROIs, β is the coefficient vector and e is the Gaussian noise term. To infer time varying connectivity, the time dependent regression model should be estimated at each time point. However, with only one sample point, the estimator would be extremely unstable. Thus, in order to estimate time-varying structures/coefficients, weighted regression and sliding window strategies are usually employed, with the assumption that the underlying connectivity networks are changing slowly and smoothly across time. In this chapter, we adopt the sliding window approach. Suppose that the window length is L and the step size of moving the window is one; the sliding window based time dependent regression model can be represented as

Yt = Xt βt + et,  (5.2)

where t represents the window index and we estimate the regression coefficient vector within each window respectively. Yt is the response vector at window t with length L, and Xt is the L×K predictor matrix. The total number of windows is M = T − L + 1.
βt is the coefficient vector we need to infer at window t.

Determining the window length L is challenging. With too large a window, the estimate may be insensitive to possibly important temporal features, while with too short a window, the estimated connectivity may suffer from large fluctuations and inference difficulties. Simultaneously modeling the regression models of all windows may improve statistical efficiency, and the multitask learning model is suitable for jointly making inference over all windows.

Another realistic assumption is that brain networks maintain certain time-invariant connectivity patterns while still allowing specific transient features. One solution to this overlapped feature selection in the multitask setting is the dirty model, which leverages a constraint on the common features across the tasks [70]. Extending it to the time varying setting, we can denote the model as

Yt = Xt S + Xt Dt + et,  (5.3)

where S, which is time-invariant, represents the static coefficients, Dt denotes the temporal dynamic coefficients, and the combined coefficients are denoted as Bt = S + Dt.

The objective function for inferring S and Dt is formulated as

minimize over S, Dt (t = 1, 2, ..., M):
(1/(2L)) Σ_{t=1}^{M} ||Yt − Xt(S + Dt)||_2^2 + λ||S||_{1,∞} + γ||Dt||_{1,1},  (5.4)

where Yt denotes the time course of one brain ROI and Xt is the predictor matrix based on the time courses of all other ROIs at window t (t = 1, 2, ..., M, with M being the total number of windows, M = T − L + 1, T the sample length and L the window length). λ is the tuning parameter controlling the static coefficients' sparsity and γ is the tuning parameter constraining the time-dependent coefficients' sparsity. Suppose E is a vector, C is a matrix and C(j) is the jth row of C. ||E||_2^2 is defined as the sum of squared values of the elements in E. ||C||_{1,1} is defined as the sum of absolute values of the elements in C. ||C||_{1,∞} = Σ_j |C(j)|_∞, where |C(j)|_∞ is defined as the maximum absolute value in row C(j).
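To make the sliding-window notation concrete, the objective in Eqn. 5.4 can be evaluated directly once the windows are formed. The following is a minimal sketch of ours (function name and array layout are not from the thesis); since S here is a single time-invariant vector, the ℓ1,∞ block norm reduces to an ordinary ℓ1 norm on S.

```python
import numpy as np

def csdm_objective(Y, X, S, D, L, lam, gamma):
    """Evaluate the CSDM objective of Eqn. 5.4.

    Y : (T,) time course of the response ROI
    X : (T, K) time courses of the other ROIs
    S : (K,) static coefficients, shared by all windows
    D : (M, K) dynamic coefficients, one row per window
    L : window length; the M = T - L + 1 windows advance by one sample
    """
    T, K = X.shape
    M = T - L + 1
    assert D.shape == (M, K)
    loss = 0.0
    for t in range(M):
        Yt, Xt = Y[t:t + L], X[t:t + L]      # window t of Eqn. 5.2
        r = Yt - Xt @ (S + D[t])             # residual Y_t - X_t(S + D_t)
        loss += r @ r                        # squared L2 norm term
    loss /= 2.0 * L
    # With a single time-invariant S, ||S||_{1,inf} = sum_j |S_j|.
    return loss + lam * np.abs(S).sum() + gamma * np.abs(D).sum()
```

With λ = γ = 0 and data generated exactly from (S, Dt), the objective is zero, which makes such a helper convenient for checking an optimizer.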
This optimization problem can be solved using the accelerated gradient methods (AGM) implemented in the MALSAR (multi-task learning via structural regularization) toolbox [159].

However, in real applications, the penalty can substantially shrink the coefficients and lead to imprecise estimators. Therefore the above multitask learning model mainly serves as the feature selection tool in our proposed approach. With the variables selected in Eqn. 5.4, a least squares model is further adopted to better estimate the coefficients. Suppose the selected variables for the static and dynamic features are XSt and XDt respectively. The objective function of the least squares estimation is formulated as

f(S, D) = Σ_{t=1}^{M} ||Yt − XSt S − XDt Dt||_2^2,  (5.5)

where XSt and XDt are the predictor matrices at window index t for the static and dynamic variables respectively.

The derivatives with respect to S and Dt are

∂f/∂S = −2 Σ_{t=1}^{M} XSt′Yt + 2 Σ_{t=1}^{M} XSt′XDt Dt + 2 Σ_{t=1}^{M} XSt′XSt S,  (5.6)

∂f/∂Dt = −2 XDt′Yt + 2 XDt′XSt S + 2 XDt′XDt Dt.  (5.7)

Setting Eqn. 5.6 and Eqn. 5.7 to zero, we obtain the static coefficients S and time dependent coefficients Dt as

S = (Σ_{t=1}^{M} XSt′Zt XSt)^{−1} (Σ_{t=1}^{M} XSt′Zt Yt),  (5.8)

Dt = (XDt′XDt)^{−1} XDt′Yt − (XDt′XDt)^{−1} XDt′XSt S,  (5.9)

where Zt = IL − XDt(XDt′XDt)^{−1} XDt′ and IL is the L×L identity matrix. The implementation of the combined static and dynamic model is summarized in Table 5.1.

To perform inference of brain connectivity networks, we utilize a linear regression approach. We treat each ROI in turn as the response vector and the signals from all other ROIs as the predictor matrix. The combined static and dynamic model as described in Table 5.1 is employed to infer the coefficients, and the estimated coefficient vector gives the strength of connectivity from all other ROIs to the response ROI, i.e., the estimated directed connections. In this way, the temporal coefficient vector for each ROI is estimated one by one until we obtain the whole brain network.
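The closed-form refit of Eqns. 5.8 and 5.9 is straightforward to implement once the static and dynamic feature columns have been selected. Below is a minimal sketch under our own interface assumptions (lists of per-window arrays; `pinv` used for robustness when a window's dynamic design is rank-deficient):

```python
import numpy as np

def refit_least_squares(Ys, XS, XD):
    """Least-squares refit on the selected features (Eqns. 5.8-5.9).

    Ys : list of M response vectors, each of length L
    XS : list of M (L, pS) static-feature predictor matrices
    XD : list of M (L, pD) dynamic-feature predictor matrices
    Returns the static coefficients S and the per-window dynamic
    coefficients D_t.
    """
    L = len(Ys[0])
    A, b = 0.0, 0.0
    for Yt, XSt, XDt in zip(Ys, XS, XD):
        # Z_t = I_L - X^D_t (X^D_t' X^D_t)^{-1} X^D_t'  (projection
        # onto the orthogonal complement of the dynamic columns)
        Zt = np.eye(L) - XDt @ np.linalg.pinv(XDt.T @ XDt) @ XDt.T
        A = A + XSt.T @ Zt @ XSt
        b = b + XSt.T @ Zt @ Yt
    S = np.linalg.solve(A, b)                          # Eqn. 5.8
    # D_t = (X^D_t' X^D_t)^{-1} X^D_t' (Y_t - X^S_t S)  (Eqn. 5.9)
    D = [np.linalg.pinv(XDt.T @ XDt) @ XDt.T @ (Yt - XSt @ S)
         for Yt, XSt, XDt in zip(Ys, XS, XD)]
    return S, D
```

On noiseless data generated from known (S, Dt) with identifiable designs, this refit recovers the coefficients exactly, which is a useful sanity check before applying it to selected fMRI features.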
The tuning parameters λ and γ could be determined according to the Bayesian Information Criterion or a pre-specified network sparsity. However, in this study, we fix either the parameters or the densities for a fair group comparison.

Table 5.1: Implementation of the combined static and dynamic model for time varying coefficient estimation.

Input: Y ∈ R^T, X ∈ R^{T×K}, regularization parameters λ, γ and window length L.
Step 1: Define the response vector Yt and predictor matrix Xt at each time window, t = 1, 2, ..., M, M = T − L + 1.
Step 2: Select the static and dynamic features XS, XD according to Eqn. 5.4: minimize over S, Dt (t = 1, ..., M) of (1/(2L)) Σ_{t=1}^{M} ||Yt − Xt(S + Dt)||_2^2 + λ||S||_{1,∞} + γ||Dt||_{1,1}.
Step 3: Based on the selected static and dynamic features, estimate the coefficients according to Eqn. 5.8 and Eqn. 5.9: S = (Σ_{t=1}^{M} XSt′Zt XSt)^{−1}(Σ_{t=1}^{M} XSt′Zt Yt), Dt = (XDt′XDt)^{−1} XDt′Yt − (XDt′XDt)^{−1} XDt′XSt S, t = 1, 2, ..., M, where Zt = IL − XDt(XDt′XDt)^{−1} XDt′ and IL is the L×L identity matrix.
Output: Static and dynamic coefficients.

5.2.2 Eigenconnectivity Network Extraction

Interpreting the learned group/dynamic brain connectivity networks is challenging due to the temporal-spatial complexity of the connectivity patterns. To facilitate such interpretations, in this study, eigenconnectivity networks are extracted based on the concatenated static and dynamic connectivity networks using the decomposition approach of [80]. They serve as representative spatial connectivity patterns, which can then be compared across groups.

In order to estimate the eigenconnectivity networks, we first vectorize and concatenate the dynamic connectivity networks. Suppose the number of ROIs is K; the learned connectivity network for one subject at one window is then a square matrix of size K×K, which can be converted to a vector of length K(K−1) (the diagonal elements are all zeros, and thus can be ignored).
Suppose the number of control subjects is P1, the number of PD subjects is P2, the total number of subjects is P = P1 + P2, and the number of windows for each subject is M. The vectors for all the subjects across all time windows are then concatenated to form an observation matrix O of size K(K−1)×PM.

Principal Component Analysis (PCA), one popular blind source separation method, is then applied to the formed observation matrix. It assumes that the observations are mixtures of underlying orthogonal sources, and PCA projects the original data onto a space with uncorrelated variables. Applying PCA, we have

O = A × W′,  (5.10)

where A contains the principal components, each column yielding an eigenconnectivity after reshaping to a K×K matrix. W is the loading weight matrix, with each column corresponding to one component. The elements of W denote the contributions to the estimated eigenconnectivities. To estimate A and W, singular value decomposition (SVD) is commonly employed. In real applications, we are often interested in a small number of eigenconnectivities that are able to explain most of the variation in the group/dynamic observation matrix. More details can be found in [80].

In this study, we estimate the eigenconnectivity networks using the static and dynamic connections respectively. We investigate the corresponding eigenconnectivities that contain 75% of the variance, and we further compare the differences in the averaged contributions between the control and PD groups.

5.2.3 Dynamic Feature Extraction

To further compare the temporal variability of the estimated networks, the network variation is adopted as described in Section 4.2.3,

V = (1/(M−1)) Σ_{t=2}^{M} ||GD(t) − GD(t−1)||_F^2,  (5.11)

where M is the number of windows, t represents the window index and GD(t) represents the dynamic brain connectivity network, which is a matrix estimated using the proposed approach as described in Section 5.2.1.
Network variation calculates the average distance between the brain connectivity networks at adjacent time windows, which is an efficient way to measure the variability of the networks across time. In our real data application, we will compare network variations between different groups.

5.3 Simulations

To validate the proposed method, we performed simulations on the combined static and dynamic model and compared its performance with that of other regression approaches. In [70], it has been shown that the dirty model outperforms both the Lasso and group Lasso models. Here we focus on the estimation performance improvement of the proposed approach in the time-varying setting.

The data were generated from a Gaussian model with static and dynamic structures as Yt = Xt(S + Dt) + et, where t represents the time index, S is the time invariant coefficients and Dt is the time dependent coefficients. In detail, we first randomly generated the static coefficients S and the time varying coefficients Dt with P variables. The static features S remained the same at all time points and the time dependent coefficients Dt changed with time. More specifically, each non-zero dynamic coefficient was assumed to be present for several successive time points, which constituted the time scale of the changing coefficients (as shown in Fig. 5.1 (b)). In the simulation, the time scale of the changing coefficients was N and the total sample length was T = 3×N. The design matrix X was randomly generated, containing T observations and P variables. The response vector Y was calculated by Yt = Xt(S + Dt) + et with t = 1, ..., T, where et was Gaussian white noise.

In the first simulation, P was 50, the sparsity of the coefficients was set to 0.16, and the time scale N was 50. We compared the performance of the different methods while changing the static feature ratio r (the ratio of static features to all non-zero features) from 0.3 to 0.8. One example of the generated model is shown in Fig. 5.1 (a)-(c).
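The data-generating process described above can be sketched as follows. The interface, names and noise level are our own choices and are not fixed by the thesis; each dynamic feature is active over a single random block of N successive time points, as in Fig. 5.1 (b).

```python
import numpy as np

rng = np.random.default_rng(0)

def make_static_dynamic_data(T, P, N, sparsity, static_ratio, noise=0.1):
    """Generate Y_t = X_t (S + D_t) + e_t with mixed features.

    T : sample length, P : number of variables, N : time scale of the
    changing coefficients, sparsity : fraction of non-zero features,
    static_ratio : fraction of the non-zero features that are static.
    """
    n_active = int(round(sparsity * P))
    n_static = int(round(static_ratio * n_active))
    idx = rng.permutation(P)

    # Static coefficients: fixed over the whole recording.
    S = np.zeros(P)
    S[idx[:n_static]] = rng.standard_normal(n_static)

    # Dynamic coefficients: each non-zero only over one block of N
    # successive time points.
    D = np.zeros((T, P))
    for j in idx[n_static:n_active]:
        start = rng.integers(0, T - N + 1)
        D[start:start + N, j] = rng.standard_normal()

    X = rng.standard_normal((T, P))
    Y = np.einsum('tp,tp->t', X, S + D) + noise * rng.standard_normal(T)
    return X, Y, S, D
```

Sweeping `static_ratio` from 0.3 to 0.8 with `T = 150, P = 50, N = 50, sparsity = 0.16` then reproduces the setup of the first simulation.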
The second simulation tested the performance of the algorithms as a function of the time scale N. P was set to 20, the sparsity of the coefficients was 0.25, and the static feature ratio was 0.6.

We compared the estimation performance of the proposed approach with the static lasso (SL) [141], time varying group lasso (TVGL) and time varying lasso (TVL) models. TVGL and TVL extend the original group lasso [157] and lasso [141] models to the time-varying setting based on the sliding window approach described in Section 5.2. The SL model assumes that the coefficients are the same across time, and all time points are used together to estimate the sparse coefficients. TVGL assumes that the temporal features are similar across time and applies the group estimation to all windows. Finally, the TVL model estimates the coefficients at each time window separately. The parameters were selected according to the sparsity of the estimated structures for all models.

Figure 5.1: Simulation example. (a) True static coefficients. (b) True dynamic coefficients. (c) True combined coefficients. (d) Estimated static coefficients using the proposed CSDM. (e) Estimated dynamic coefficients using CSDM. (f) Estimated combined coefficients using CSDM. (g) Estimated coefficients using the static lasso model. (h) Estimated coefficients using the time varying group lasso model. (i) Estimated coefficients using the time varying lasso model. The time index is along the x-axis, the variable index is along the y-axis and the color bar represents the coefficients' strength.

Figure 5.2: The comparison of the L2-loss of the estimated coefficients for (a) simulation 1 and (b) simulation 2. Red, black, green and pink lines represent the proposed CSDM, static Lasso [141], time varying group Lasso (modified based on [157]) and time varying Lasso (modified based on [141]) respectively.

One example of the estimated results of the different approaches in the first simulation is shown in Fig. 5.1. It demonstrates that the proposed CSDM was able to estimate the static features precisely and also yielded relatively good estimation accuracy for the dynamic coefficients. In the simulations, the L2 loss of the estimated coefficients of the different models was compared. We tested the performance of the algorithms as a function of the static feature ratio r in the first simulation. We repeated each procedure fifty times and compared the averaged performance, as shown in Fig. 5.2 (a). It was noted that with increasing static feature ratio r, all the models produced better performance. This may be due to the fact that the static features were contained in all windows while the dynamic features could only be detected within limited time epochs. Thus, with an increasing number of static features, all the methods produced higher estimation accuracy. In the second simulation, as shown in Fig. 5.2 (b), the time scale did not affect the performance of the static lasso model much. However, the estimation precision of all other methods improved with increasing time scale N. The CSDM model always obtained the lowest estimation loss, as shown in Fig. 5.2.

It is worth mentioning that our proposed approach was able to detect the static features and dynamic features simultaneously, which is promising for describing mixed static and dynamic models. For the other methods, post-hoc testing would be necessary to distinguish the static and dynamic features.

5.4 Real Application

In this section, we apply the proposed method to a real resting state fMRI data set and examine the static as well as dynamic properties of brain connectivity networks in subjects with Parkinson's Disease (PD).
We first estimate the brain connectivity networks for each subject and then compare the temporal patterns of the inferred connectivity networks. We are interested in exploring the dynamic and static features in this PD study.

5.4.1 Subjects and fMRI Resting State Data Set

In our study, twelve subjects with PD and ten age-matched controls were recruited. Resting state fMRI data were collected, and PD subjects participated in the experiment before and after receiving L-dopa medication (denoted as PDon and PDoff). All experiments were approved by the appropriate University Ethics board. The parameter settings of data collection were the same as those of the study in Section 4.4.

Table 5.2: The index and name of 54 selected brain ROIs. 'L' represents the brain left side and 'R' represents the brain right side.

ID  Name                                        ID  Name
1   ctx-L-G-frontmiddle                         28  ctx-R-G-frontmiddle
2   ctx-L-ventral-lat-prefrontalcortex          29  ctx-R-ventral-lat-prefrontalcortex
3   ctx-L-insula                                30  ctx-R-insula
4   ctx-L-superiortemporal                      31  ctx-R-superiortemporal
5   ctx-L-middletemporal                        32  ctx-R-middletemporal
6   ctx-L-inferiortemporal                      33  ctx-R-inferiortemporal
7   ctx-L-parahippocampal                       34  ctx-R-parahippocampal
8   L-Hippocampus                               35  R-Hippocampus
9   L-Somatosensory                             36  R-Somatosensory
10  ctx-L-superiorparietal                      37  ctx-R-superiorparietal
11  ctx-L-inferiorparietal                      38  ctx-R-inferiorparietal
12  ctx-L-occipital-parietal-visual-assoc-area  39  ctx-R-occipital-parietal-visual-assoc-area
13  ctx-L-lateraloccipital                      40  ctx-R-lateraloccipital
14  ctx-L-caudalanterior-cingulate              41  ctx-R-caudalanterior-cingulate
15  ctx-L-posteriorcingulate                    42  ctx-R-posteriorcingulate
16  ctx-L-precuneus                             43  ctx-R-precuneus
17  ctx-L-vent-medial-prefrontal-cortex         44  ctx-R-vent-medial-prefrontal-cortex
18  L-Cerebellum                                45  R-Cerebellum
19  L-M1                                        46  R-M1
20  L-SMA-proper                                47  R-SMA-proper
21  L-Pre-SMA                                   48  R-Pre-SMA
22  L-PMd                                       49  R-PMd
23  L-PMv                                       50  R-PMv
24  L-Thalamus-Proper                           51  R-Thalamus-Proper
25  L-Caudate                                   52  R-Caudate
26  L-Putamen                                   53  R-Putamen
27  L-Pallidum                                  54  R-Pallidum
The duration of each functional run was 8 minutes, and fifty-four Freesurfer-derived brain ROIs were chosen to learn the brain connectivity networks, as shown in Table 5.2.

5.4.2 Results

Figure 5.3: Examples of static eigenconnectivities (fixed density). (a)-(d): Static eigenconnectivity networks 1, 2, 3, and 11, respectively.

To fairly compare the connectivity patterns of the PD and control groups, we chose either a fixed density (sparsity = 0.17) or fixed parameters (λ = 1900, γ = 20) for all the subjects in this study. The length of the sliding window was set to 128 s (64 TRs) in order to avoid large fluctuations.

Eigenconnectivities are adopted to investigate the spatial patterns of the estimated time-invariant and time-evolving connections. For the fixed sparsity setting, 11 static eigenconnectivity networks and 161 dynamic eigenconnectivity networks were identified, respectively, explaining 75% of the variance (Fig. 5.3 and Fig. 5.4).

Figure 5.4: Examples of dynamic eigenconnectivities (fixed density). (a)-(d): Dynamic eigenconnectivity networks 1, 2, 3, and 161, respectively.

As shown in Fig. 5.3, the cross-hemisphere bilateral ROIs were strongly maintained in all the static eigenconnectivities, which was consistent with prior medical knowledge. However, for the dynamic features in Fig. 5.4, the variability of the connectivity patterns was very large. It was noted that the first dynamic eigenconnectivity had clear bilateral connections (Fig. 5.4(a)) while the others had quite different spatial patterns. Similar eigenconnectivity patterns were also identified in the fixed parameters cases.

We further examined the corresponding averaged loading weights, denoting the group contributions to each eigenconnectivity. An unpaired t-test was applied to find the eigenconnectivities with significantly different averaged time-dependent weights between the normal and PD (both PD on and off medication) groups.

Figure 5.5: (a) The static eigenconnectivity network 2 (fixed density). (b) The comparison of averaged contributions of the control and PD groups to the static eigenconnectivity network in (a). (c) The static eigenconnectivity network 3 (fixed parameters). (d) The comparison of averaged contributions of the control and PD groups to the static eigenconnectivity network in (c).

As shown in Fig. 5.5 (a) and (b), for the fixed density case, the time-dependent weights for the second static eigenconnectivity network were significantly different between the control and PD groups (P value = 0.0327). The different contributions from the two groups may indicate connectivity network alterations due to PD. When comparing the loading weights of the 161 dynamic eigenconnectivity networks, 8 of them were found to be significantly different between groups. The first significantly different dynamic eigenconnectivity network for the fixed density case is shown in Fig. 5.6 (a) and (b).

Figure 5.6: (a) The dynamic eigenconnectivity network 5 (fixed density). (b) The comparison of averaged contributions of the control and PD groups to the dynamic eigenconnectivity network in (a). (c) The dynamic eigenconnectivity network 5 (fixed parameters). (d) The comparison of averaged contributions of the control and PD groups to the dynamic eigenconnectivity network in (c).

We noted that the eigenconnectivity network patterns based on static and dynamic features were quite different. The static connectivity networks preserved the most robust connections, while the dynamic connectivity patterns may provide insights into the network reorganization across time.

Figure 5.7: The comparison of network variations between the control, PDon and PDoff groups with either (a) fixed density (density = 0.17) or (b) fixed parameters (λ = 1900, γ = 20).
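At its core, the eigenconnectivity analysis above is a PCA over vectorized connectivity estimates: stack each subject's (or window's) connectivity matrix as a row, and the leading principal axes are the eigenconnectivities while the projections give per-sample loading weights. The following is a minimal sketch on synthetic data; the function and variable names are ours, not the thesis code, and real inputs would be the estimated networks rather than random numbers.

```python
import numpy as np

def eigenconnectivities(conn_stack, var_explained=0.75):
    # conn_stack: (n_samples, n_edges) -- each row is the vectorized
    # upper triangle of one connectivity matrix (one subject or window).
    centered = conn_stack - conn_stack.mean(axis=0)
    # SVD of the centered stack: rows of Vt are the principal axes,
    # i.e. the "eigenconnectivities".
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    var_ratio = s**2 / np.sum(s**2)
    # keep the smallest number of components reaching var_explained
    k = int(np.searchsorted(np.cumsum(var_ratio), var_explained)) + 1
    components = Vt[:k]
    loadings = centered @ components.T  # per-sample contribution weights
    return components, loadings

# toy example: 20 samples over 54 ROIs -> 54*53/2 = 1431 edges
rng = np.random.default_rng(0)
stack = rng.normal(size=(20, 1431))
comps, weights = eigenconnectivities(stack)
```

Group differences can then be tested on the per-subject loading weights, as is done in the text.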
When investigating the eigenconnectivity networks, both could provide important features to distinguish the control and PD groups.

In addition to the spatial patterns, we also studied the network variations between the control and PD groups, as demonstrated in Fig. 5.7. The averaged network variations were significantly lower in the PD groups, in both the fixed density case (Fig. 5.7(a)) and the fixed parameters case (Fig. 5.7(b)). This is consistent with prior observations that PD subjects tend to have "cognitive inflexibility" and find it difficult to fluidly and rapidly switch between different contingencies, such as following an arbitrary set of rules during a task. We also noticed that the averaged network variation of the PDon group was closer to that of the control group, which may demonstrate the beneficial effects of L-dopa medication on cognition in Parkinson's Disease.

5.5 Conclusion and Discussion

Studying the dynamic properties of brain connectivity networks can extend our understanding of brain functioning. It provides a promising way to assess the evolution of brain organization. The influence of tasks and external stimuli can also be explored, which is crucial in evaluating the brain during adaptation. Brain connectivity dynamics also open new avenues to the exploration of neurodegenerative disorders. Compared with static connectivity, time-varying connectivity patterns may provide a more sensitive and specific biomarker of disease [17].

However, purely assuming that brain fluctuations are static or dynamic may be oversimplified. Previous studies have suggested that some central couplings dominating brain coordination are largely invariant across specific cognitive states [77]. Relatively stable connections may be present alongside transient couplings.
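To make the network-variation comparison of Fig. 5.7 concrete: one simple variation measure (an assumption for illustration, not necessarily the thesis' exact definition) is the edge-wise standard deviation of the windowed connectivity estimates, averaged over edges, with group differences assessed here by a nonparametric permutation test in place of a parametric one. All names and the toy numbers below are ours.

```python
import numpy as np

def network_variation(windowed_conn):
    # windowed_conn: (n_windows, n_edges) vectorized connectivity per window;
    # average over edges of the temporal standard deviation
    return windowed_conn.std(axis=0).mean()

def permutation_pvalue(x, y, n_perm=2000, seed=0):
    # two-sided permutation test on the difference of group means
    rng = np.random.default_rng(seed)
    observed = abs(np.mean(x) - np.mean(y))
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        hits += abs(perm[:len(x)].mean() - perm[len(x):].mean()) >= observed
    return (hits + 1) / (n_perm + 1)  # add-one smoothing

# toy per-subject variation scores for two hypothetical groups
rng = np.random.default_rng(1)
control_scores = rng.normal(1.0, 0.1, size=10)
pd_scores = rng.normal(0.7, 0.1, size=12)   # lower variation, as in PD
p = permutation_pvalue(control_scores, pd_scores)
```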
Thus, a mixed temporal connectivity model with both time-invariant and time-related connections is likely required.

In this chapter, a combined static and dynamic brain connectivity network modeling approach has been proposed. Compared with previous multivariate time-varying approaches introduced for fMRI brain network estimation, our method is more flexible, as it is able to identify the time-invariant and time-dependent connectivity features simultaneously.

The proposed method has been applied to a real resting state fMRI data set in Parkinson's Disease. The estimated static and dynamic brain connectivity networks are found to be quite different, representing complementary information about the brain connectivity patterns. In addition, the PD subjects have significantly reduced network variation, likely related to impaired cognitive flexibility.

There are still some challenges in studying brain dynamics. Although many studies have been conducted on time-dependent brain network estimation, there is no consensus on the underlying brain connectivity patterns. Whether brain coupling changes smoothly or suddenly is still unclear, and the time scale of the brain connectivity networks is still in debate. Multimodal studies, such as combined EEG/fMRI studies, may be crucial in our future work to explore connectivity evolution at high temporal resolution [17, 145].

While the proposed method appears interesting, there are still a number of limitations. We have to select the parameters for the modeling. Although we have conducted comparisons of brain connectivity patterns with various window lengths and parameters, similar conclusions have been reached. Further work would be required to more comprehensively investigate time-varying brain connectivity patterns over a broad range of parameter settings. In this chapter, we have adopted an eigenconnectivity network estimation to compare the static and dynamic connectivity patterns.
It is a promising way to extract representative fluctuation patterns. However, resting state time-varying connectivity patterns are particularly challenging to interpret due to their spatial and temporal complexity. More measures may be required to describe the full potential of the temporal and spatial information, which could be expanded in future work.

This is our first attempt to model the static and dynamic connectivity patterns in the resting state simultaneously. Joint estimation of time-variant and time-invariant connectivity patterns may be important to fully understand the brain's robustness and efficiency.

Chapter 6

Conclusion and Future Work

6.1 Conclusion

In this thesis, we have developed a set of novel network modeling approaches for brain connectivity estimation using fMRI signals. The proposed algorithms try to address several challenges present in real brain connectivity studies, including group inference for multiple subjects, error rate control, prior knowledge incorporation and dynamic brain connectivity network modeling. All the proposed methods have been applied to real fMRI datasets, with the ultimate goal of investigating the alterations in brain connectivity patterns associated with neurological diseases. The results demonstrate that connectivity features may serve as biomarkers for future studies. The main contributions and findings of this thesis are summarized as follows.

In Chapter 2, a false discovery rate controlled, prior knowledge incorporated group-level network modeling approach has been introduced by extending the PCfdr algorithm to the group level and imposing prior knowledge on the network structures. The proposed approach was able to control the FDR directly at the group level. It was an efficient and reliable group-level inference approach.
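For intuition about the error-rate guarantee, the generic false discovery rate idea can be sketched with the standard Benjamini-Hochberg step-up rule [11]. This is not the PCfdr algorithm itself, which embeds FDR control inside the structure-learning loop; it only illustrates what controlling FDR over a set of candidate connections means.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    # Step-up procedure: reject the hypotheses (e.g. candidate
    # connections) whose sorted p-values fall under the line q*i/m.
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest i with p_(i) <= q*i/m
        reject[order[:k + 1]] = True
    return reject

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.27, 0.60])
mask = benjamini_hochberg(pvals, q=0.05)
```

On this toy input only the two smallest p-values survive: 0.039 exceeds its threshold 0.025, so the step-up rule stops before it.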
When applied to fMRI data collected from subjects with Parkinson's Disease on and off L-dopa medication and normal controls performing a motor task, we found robust group evidence of disease-related changes, compensatory changes and the normalizing effect of L-dopa medication. Our proposed approach was able to detect reliable group-level connectivity features, which made the group comparisons possible.

To incorporate prior knowledge from another modality and to model the diversity in the brain connectivity networks, instead of conventionally treating all the subjects equally, a genetically informed group brain connectivity network modeling approach was proposed in Chapter 3. While genetic information was used in our study, other prior information such as clinical indices could also be adopted to assist the brain connectivity diversity modeling. The proposed framework was more flexible in dealing with the inter-subject variability in a population, and it efficiently integrated the information from other modalities, which greatly assisted the final biological interpretations. Using the proposed approach, we demonstrated the changes in connectivity patterns under the effect of a specific genetic variation and how this was modulated by Schizophrenia.

Modeling the evolution of brain connectivity networks over time is of great importance due to the fact that the brain is inherently non-stationary. Studying the dynamic properties of brain connectivity could extend our understanding of brain functioning. In Chapter 4, a sticky weighted time-varying model was developed to investigate the time-dependent brain connectivity networks based on the fused regression model. The proposed method was able to recover both smoothly changing coefficients and abruptly changing structures. In addition, the proposed method allowed different pairs of brain regions to exhibit fluctuations at different time scales, which made the model more flexible.
When applied to a real resting state fMRI study in Parkinson's Disease, PD subjects were found to have significantly reduced network variation, likely related to impaired cognitive flexibility in PD. The dynamics of brain connectivity patterns have thus provided us more insights into the neurological disorders.

In Chapter 5, we further relaxed the assumptions on the dynamic brain connectivity networks. Instead of purely assuming static or dynamic connectivity, we proposed learning both time-invariant connections and time-varying coupling patterns simultaneously by employing a multitask learning model, followed by a least squares approach to precisely estimate the connectivity coefficients. It was our first attempt to jointly model the static and dynamic connectivity patterns in the resting state. It served as a potential tool to fully describe the brain's robustness and flexibility. The proposed method was applied to a real resting state fMRI data set in Parkinson's Disease. The estimated static and dynamic brain connectivity networks were found to be quite different, representing complementary information about the brain connectivity patterns. In addition, the PD subjects had significantly reduced network variation, which was consistent with our previous study.

In summary, the major contributions of this thesis work are developing a collection of novel brain connectivity modeling approaches that are able to deal with particular challenges present in real fMRI applications. The group comparisons between subjects in control and disease states based on the proposed group-level inference have examined the effects of L-dopa medication and enhanced the understanding of disease-induced changes in brain connectivity patterns. The second work expanded the imaging genetics studies and provided a new strategy for prior knowledge incorporation into brain network modeling.
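The joint static-plus-dynamic idea can be caricatured in a few lines: take windowed coefficient estimates, treat a robust central tendency as the shared static part, and soft-threshold the deviations so that the dynamic part is sparse. This is only a crude stand-in for the multitask ("dirty model" [70]) formulation used in Chapter 5; the function name, the median choice and the threshold value are ours.

```python
import numpy as np

def split_static_dynamic(coefs, thresh=0.2):
    # coefs: (n_windows, n_edges) time-varying coefficient estimates
    static = np.median(coefs, axis=0)        # shared, time-invariant part
    resid = coefs - static
    # soft-thresholding keeps only large deviations -> sparse dynamic part
    dynamic = np.sign(resid) * np.maximum(np.abs(resid) - thresh, 0.0)
    return static, dynamic

# one persistently strong edge and one transiently fluctuating edge
coefs = np.array([[1.0,  0.0],
                  [1.0,  1.0],
                  [1.0,  0.0],
                  [1.0, -1.0],
                  [1.0,  0.0]])
static, dynamic = split_static_dynamic(coefs)
```

The constant edge ends up entirely in the static component, while only the large transient excursions of the second edge survive in the dynamic component.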
The proposed time-varying brain connectivity modeling approaches in this thesis assessed brain functions in the additional temporal dimension, which would be promising in studies of neurological disorders. The results suggest that certain connectivity patterns could serve as biomarkers and deserve further investigation in future studies.

6.2 Future Work

Although many methods have been developed in this area, it is worth noting that each method has its own advantages and limitations, and there are still many more aspects of brain connectivity that need to be further studied. An increasing number of combined methods have been developed, which benefit from each other's complementary advantages. In the following, we discuss some possible future directions regarding brain connectivity network estimation.

6.2.1 Brain Connectivity Transition Patterns Estimation

As in our aforementioned discussions, time-dependent brain connectivity has provided a new perspective on brain adaptation and reconfiguration. In particular, the dynamics of brain connectivity may be altered in disease states, which indicates the importance of investigating the time-varying brain connectivity networks [80, 90, 124].

Most existing approaches examine the temporal changes of intrinsic brain connectivity patterns in the resting state, while a few studies have demonstrated that connectivity can be mediated by learning and/or task performance [7, 42, 81, 124]. Therefore, it would be interesting to investigate the reconfiguration of brain connectivity under different cognitive states.
For instance, the transition patterns from a task state to the resting state may be vital for describing brain functioning, and have potential for investigating a variety of neurological disorders.

Considering the success of decomposition methods in the area of static brain connectivity network modeling, a possible strategy for state-dependent brain network estimation is the HMM-IVA (Hidden Markov Model-Independent Vector Analysis) model, a joint multidimensional multi-state network estimation approach aiming to extract the state-dependent common sources from several data sets simultaneously. Independent Vector Analysis (IVA) is a joint blind source separation method which has been successfully introduced to simultaneously extract the underlying common sources across multiple data sets. Combined with an HMM, we could extend it to dynamic settings where connectivity transition patterns under different conditions can be identified.

6.2.2 Large Scale Brain Connectivity Network Modeling

Due to the scarcity of fMRI samples, current graphical models for brain connectivity estimation usually involve dozens or hundreds of nodes. However, fMRI simultaneously measures around 100,000 voxels, which have traditionally been averaged within each ROI. Designing algorithms that are able to recover networks of thousands of variables will be of great interest for brain connectivity modeling.

To recover large scale brain connectivity networks, algorithms with high computational efficiency are demanded. High-dimensional precision matrix inference is one possible method for estimating large scale voxel-wise brain interactions [29, 93].

The multi-scaling approach is another potential strategy to efficiently estimate the brain connectivity networks hierarchically.
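As a small illustration of the precision-matrix route mentioned above: the off-diagonal entries of the (negated, normalized) precision matrix are partial correlations, which separate direct from indirect coupling. The sketch below uses a plain matrix inverse on a three-node toy chain; a voxel-scale version would substitute a regularized estimator (such as the graphical lasso) for the inverse, as in [29, 93]. The names and toy data are ours.

```python
import numpy as np

def partial_correlations(X):
    # X: (n_samples, n_nodes). Invert the sample covariance to get the
    # precision matrix; normalizing and negating its off-diagonal gives
    # pairwise coupling with all other nodes regressed out.
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# toy chain x -> y -> z: x and z are correlated only *through* y
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x + rng.normal(size=2000)
z = y + rng.normal(size=2000)
pc = partial_correlations(np.column_stack([x, y, z]))
```

Although x and z are strongly marginally correlated, their partial correlation is near zero, so the chain structure is recovered.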
The spatial information could be incorporated into the process of network modeling at both the coarse and fine levels. The relationships between small regions contained within certain functional areas could first be estimated and the representative signals extracted at the fine scale. Then the interactions between larger ROIs, using the representative signals, are studied at a coarser level.

6.2.3 Connectivity Based Brain Voxel Selection

Two straightforward approaches to assess brain connectivity networks are voxel-based and ROI-based methods. Voxel-based approaches usually involve a large number of variables. In general, the correlation threshold method is adopted to study correlation fMRI maps with thousands of voxels. However, graphical models, which are not able to handle a large number of variables, favour the ROI-based connectivity modeling approaches with a small number of variables involved. In addition, the voxel-based approaches are more sensitive to noise, which makes the results unstable. As a result, ROI-based approaches are usually adopted. However, we have to carefully define the ROIs with accurate voxels selected. Otherwise, the signal fluctuations within a single ROI may be the result of the influence of multiple underlying brain networks.

The parcellation of the brain is a challenging problem. Brain regions could be defined based on their anatomical structures or functional specializations [138]. However, labelling of ROIs is a time-consuming task, and the accurate definition of some region boundaries is still in debate. To relax the requirements for prior knowledge, several data-driven methods have been developed, such as the clustering approach [36] and the seed region growing approach [94].

Previous research has demonstrated that the properties of brain connectivity networks heavily depend on the parcellation of brain ROIs [152].
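A toy version of the clustering approach to data-driven parcellation groups voxels by the similarity of their time series. The sketch below is a minimal k-means with deterministic farthest-point seeding, written for illustration only; real pipelines such as [36] add spatial constraints and operate on whole-brain data.

```python
import numpy as np

def kmeans_parcellation(voxel_ts, k, n_iter=50):
    # voxel_ts: (n_voxels, n_timepoints); returns a parcel label per voxel
    centers = [voxel_ts[0]]
    for _ in range(k - 1):           # farthest-point initialization
        d = np.min([((voxel_ts - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(voxel_ts[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(n_iter):          # standard Lloyd iterations
        dists = ((voxel_ts[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = voxel_ts[labels == j].mean(0)
    return labels

# two hypothetical "functional areas" driven by two distinct signals
rng = np.random.default_rng(0)
t = np.linspace(0, 8 * np.pi, 200)
s1, s2 = np.sin(t), np.cos(2 * t)
voxels = np.vstack([s1 + 0.05 * rng.normal(size=(10, 200)),
                    s2 + 0.05 * rng.normal(size=(10, 200))])
labels = kmeans_parcellation(voxels, k=2)
```

Voxels sharing a driving signal end up in the same parcel, which is the behaviour a connectivity-friendly ROI definition should guarantee.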
For the purpose of connectivity estimation, ROI definition approaches should be able to select functionally and spatially homogeneous voxels within each region. Connectivity-based ROI parcellation methods have been of great interest. Applying the modularity graph measure to the functional connectivity network, Barnes et al. try to detect the optimal divisions in the basal ganglia [5]. Spatially regularized canonical correlation analysis is used to define the ROIs as a set of voxels with similar connectivity patterns to other ROIs [39]. In another interesting study, a classifier-based approach is adopted to identify brain ROIs with consistent genotype-phenotype relationships [127].

In our preliminary study, the spatial information is taken into consideration in the brain region definition by a sparse spatially regularized fused lasso regression model. The target brain ROI is defined as a functionally consistent and spatially adjacent area [158]. We plan to further develop a framework which is able to conduct whole-brain ROI definition by combining computationally efficient connectivity estimation models with the spatial constraints. To represent each ROI, different strategies could be adopted, such as averaging over all voxels, the principal component and other distance measures. It would be interesting to compare the impact of those representative signals on the properties of brain connectivity networks. In addition, using the temporal information as another criterion to segment the functional regions may be a novel way for brain parcellation, where the evolutionary clustering approach is a potential method.

6.2.4 Application to Parkinson's Disease Studies

Parkinson's Disease is one of the most common movement disorders, characterized by muscle rigidity, tremor, bradykinesia and other symptoms. In addition, cognitive and psychiatric changes often accompany PD. The high prevalence and serious consequences of PD have an enormous impact on the whole population.
The diagnosis of Parkinson's Disease heavily depends on the medical history and clinical symptoms. However, since no lab test clearly identifies PD, diagnosing PD at the early stage, and distinguishing it from other parkinsonian diseases at a sufficiently accurate level, is still challenging [2]. As one of the non-invasive neuroimaging technologies, fMRI has been widely used for studying brain activities in PD [126].

Previous studies have suggested that PD may lead to alterations of brain connectivity patterns [4, 18, 155, 156]. For instance, the functional connectivity of the motor network in the resting state was found to be disrupted in PD and related to the severity of the disease [155]. In another study, effective connectivity in the resting state was employed to assess the effects of deep brain stimulation and demonstrated the distributed effects of stimulation on the resting motor networks [73]. Modeling and comparing brain connectivity networks in health and PD could thus provide a way to evaluate disease-related connectivity abnormalities. They may further serve as potential tools for the diagnosis of PD and for quantifying the severity of PD.

We have conducted some preliminary studies on Parkinson's Disease and found interesting results. In future work, we plan to comprehensively study the brain connectivity networks in PD, in order to characterize the spatial and temporal properties associated with PD, and to extract the particular connectivity patterns of the patient population. The multimodal assessment of PD is another potential direction to jointly probe the disease and to strengthen the biomarkers from combined studies [98].

Bibliography

[1] E. A. Allen, E. Damaraju, S. M. Plis, E. B. Erhardt, T. Eichele, and V. D. Calhoun. Tracking whole-brain connectivity dynamics in the resting state. Cerebral Cortex, 2012.

[2] G. Alves, E. Forsaa, K. Pedersen, M. Dreetz Gjerstad, and J. Larsen. Epidemiology of Parkinson's disease.
Journal of Neurology, 255:18–32, 2008.

[3] N. C. Andreasen. The endophenotype concept in psychiatry: etymology and strategic intentions. American Psychiatric Pub, 2005.

[4] N. Baradaran, S. N. Tan, A. Liu, A. Ashoori, S. J. Palmer, Z. J. Wang, M. M. Oishi, and M. J. McKeown. Parkinson's disease rigidity: relation to brain connectivity and motor performance. Frontiers in Neurology, 4, 2013.

[5] K. A. Barnes, A. L. Cohen, J. D. Power, S. M. Nelson, Y. B. Dosenbach, F. M. Miezin, S. E. Petersen, and B. L. Schlaggar. Identifying basal ganglia divisions in individuals using resting-state functional connectivity MRI. Frontiers in Systems Neuroscience, 4, 2010.

[6] D. S. Bassett and E. Bullmore. Small-world brain networks. The Neuroscientist, 12(6):512–523, 2006.

[7] D. S. Bassett, N. F. Wymbs, M. A. Porter, P. J. Mucha, J. M. Carlson, and S. T. Grafton. Dynamic reconfiguration of human brain networks during learning. Proceedings of the National Academy of Sciences, 108(18):7641–7646, 2011.

[8] D. S. Bassett, B. G. Nelson, B. A. Mueller, J. Camchong, and K. O. Lim. Altered resting state complexity in schizophrenia. NeuroImage, 59(3):2196–2207, 2012.

[9] R. Baumgartner, C. Windischberger, and E. Moser. Quantification in functional magnetic resonance imaging: fuzzy clustering vs. correlation analysis. Magnetic Resonance Imaging, 16(2):115–125, 1998.

[10] C. F. Beckmann, C. E. Mackay, N. Filippini, and S. M. Smith. Group comparison of resting-state fMRI data using multi-subject ICA and dual regression. NeuroImage, 47(Suppl 1):S148, 2009.

[11] Y. Benjamini and D. Yekutieli. The control of the false discovery rate in multiple testing under dependency. Ann. Stat., 29(4):1165–1188, 2001.

[12] Y. Benjamini and D. Yekutieli.
The control of the false discovery rate in multiple testing under dependency. Ann. Stat., 29(4):1165–1188, 2001.

[13] B. Biswal, F. Zerrin Yetkin, V. M. Haughton, and J. S. Hyde. Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magnetic Resonance in Medicine, 34(4):537–541, 1995.

[14] B. B. Biswal, M. Mennes, X.-N. Zuo, S. Gohel, C. Kelly, S. M. Smith, C. F. Beckmann, J. S. Adelstein, R. L. Buckner, S. Colcombe, et al. Toward discovery science of human brain function. Proceedings of the National Academy of Sciences, 107(10):4734–4739, 2010.

[15] E. Bullmore and O. Sporns. Complex brain networks: graph theoretical analysis of structural and functional systems. Nat Rev Neurosci, 10(3):186–198, Mar. 2009.

[16] V. Calhoun, T. Adali, G. Pearlson, and J. Pekar. A method for making group inferences from functional MRI data using independent component analysis. Human Brain Mapping, 14(3):140–151, 2001.

[17] V. D. Calhoun, R. Miller, G. Pearlson, and T. Adalı. The chronnectome: time-varying connectivity networks as the next frontier in fMRI data discovery. Neuron, 84(2):262–274, 2014.

[18] M. C. Campbell, J. M. Koller, A. Z. Snyder, C. Buddhala, P. T. Kotzbauer, and J. S. Perlmutter. CSF proteins and resting-state functional connectivity in Parkinson disease. Neurology, 2015.

[19] J. Cao and K. Worsley. The geometry of correlation fields with an application to functional connectivity of the brain. Ann. Appl. Probab., 9:1021–1057, 1999.

[20] L. S. Carroll and M. J. Owen. Genetic overlap between autism, schizophrenia and bipolar disorder. Genome Medicine, 1:102.1–102.7, 2009.

[21] C. Chang and G. H. Glover.
Time-frequency dynamics of resting-state brain connectivity measured with fMRI. NeuroImage, 50(1):81–98, 2010.

[22] J. Chen, Y. Xu, J. Zhang, Z. Liu, C. Xu, K. Zhang, Y. Shen, and Q. Xu. A combined study of genetic association and brain imaging on the DAOA gene in schizophrenia. Am. J. Med. Genet., 162B:191–200, 2013.

[23] X. Chen, Z. J. Wang, and M. J. McKeown. fMRI group studies of brain connectivity via a group robust lasso. In Image Processing (ICIP), 2010 17th IEEE International Conference on, pages 589–592. IEEE, 2010.

[24] X. Chen, Z. J. Wang, and M. J. McKeown. A Bayesian lasso via reversible-jump MCMC. Signal Processing, 91(8):1920–1932, 2011.

[25] X. Chen, Q. Lin, S. Kim, J. G. Carbonell, and E. P. Xing. Smoothing proximal gradient method for general structured sparse regression. The Annals of Applied Statistics, 6(2):719–752, 2012.

[26] X. Chen, X. Shi, X. Xu, Z. Wang, R. Mills, C. Lee, and J. Xu. A two-graph guided multi-task lasso approach for eQTL mapping. In International Conference on Artificial Intelligence and Statistics, pages 208–217, 2012.

[27] X. Chen, X. Chen, R. Ward, and Z. Wang. A joint multimodal group analysis framework for modeling corticomuscular activity. Multimedia, IEEE Transactions on, 15(5):1049–1059, 2013.

[28] X. Chen, C. He, Z. Wang, and M. McKeown. An IC-PLS framework for group corticomuscular coupling analysis. Biomedical Engineering, IEEE Transactions on, 60(7):2022–2033, 2013.

[29] X. Chen, M. Xu, W. B. Wu, et al. Covariance and precision matrix estimation for high-dimensional time series. The Annals of Statistics, 41(6):2994–3021, 2013.

[30] P. Ciuciu, G. Varoquaux, P. Abry, S. Sadaghiani, and A. Kleinschmidt. Scale-free and multifractal time dynamics of fMRI signals during rest and task.
arXiv preprint arXiv:1308.4385, 2013.

[31] P. Ciuciu, P. Abry, and B. J. He. Interplay between functional connectivity and scale-free dynamics in intrinsic fMRI networks. NeuroImage, 95:248–263, 2014.

[32] M. W. Cole, D. S. Bassett, J. D. Power, T. S. Braver, and S. E. Petersen. Intrinsic and task-evoked network architectures of the human brain. Neuron, 83(1):238–251, 2014.

[33] L. Condat. A generic proximal algorithm for convex optimization: application to total variation minimization. Signal Processing Letters, IEEE, 21(8):985–989, 2014.

[34] D. Cordes, V. Haughton, J. D. Carew, K. Arfanakis, and K. Maravilla. Hierarchical clustering to measure connectivity in fMRI resting-state data. Magnetic Resonance Imaging, 20(4):305–317, 2002.

[35] N. M. Correa, T. Eichele, T. Adali, Y.-O. Li, and V. D. Calhoun. Multi-set canonical correlation analysis for the fusion of concurrent single trial ERP and functional MRI. NeuroImage, 50(4):1438–1445, 2010.

[36] R. C. Craddock, G. A. James, P. E. Holtzheimer, X. P. Hu, and H. S. Mayberg. A whole brain fMRI atlas generated via spatially constrained spectral clustering. Human Brain Mapping, 33(8):1914–1928, 2012.

[37] I. Cribben, R. Haraldsdottir, L. Y. Atlas, T. D. Wager, and M. A. Lindquist. Dynamic connectivity regression: determining state-related changes in brain connectivity. NeuroImage, 61(4):907–920, 2012.

[38] J. Damoiseaux, S. Rombouts, F. Barkhof, P. Scheltens, C. Stam, S. M. Smith, and C. Beckmann.
Consistent resting-state networks across healthy subjects. Proceedings of the National Academy of Sciences, 103(37):13848–13853, 2006.

[39] F. Deleus and M. M. Van Hulle. A connectivity-based method for defining regions-of-interest in fMRI data. Image Processing, IEEE Transactions on, 18(8):1760–1771, 2009.

[40] G. Du, M. M. Lewis, M. Styner, M. L. Shaffer, S. Sen, Q. X. Yang, and X. Huang. Combined R2* and diffusion tensor imaging changes in the substantia nigra in Parkinson's disease. Movement Disorders, 26(9):1627–1632, 2011.

[41] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani. Least angle regression. Ann. Statist., 32:407–499, 2004.

[42] A. Elton and W. Gao. Task-related modulation of functional connectivity variability and its behavioral correlations. Human Brain Mapping, 36(8):3260–3272, 2015.

[43] F. Esposito, A. Aragri, I. Pesaresi, S. Cirillo, G. Tedeschi, E. Marciano, R. Goebel, and F. Di Salle. Independent component model of the default-mode brain function: combining individual-level and population-level analyses in resting-state fMRI. Magnetic Resonance Imaging, 26(7):905–913, 2008.

[44] L. Farde, F. A. Wiesel, et al. D2 dopamine receptors in neuroleptic-naive schizophrenic patients: a positron emission tomography study with [11C]raclopride. Archives of General Psychiatry, 47:213–219, 1990.

[45] H. Fischer and J. Hennig. Neural network-based analysis of MR time series. Magnetic Resonance in Medicine, 41(1):124–131, 1999.

[46] M. D. Fox and M. E. Raichle. Spontaneous fluctuations in brain activity observed with functional magnetic resonance imaging. Nature Reviews Neuroscience, 8(9):700–711, 2007.

[47] K. Friston, L. Harrison, and W. Penny. Dynamic causal modelling. NeuroImage, 19(4):1273–1302, 2003.

[48] K. J. Friston, A. P. Holmes, K. J. Worsley, J. Poline, C. D.
Frith, R. S. Frackowiak, et al. Statistical parametric maps in functional imaging: a general linear approach. Human Brain Mapping, 2(4):189–210, 1994.

[49] Z. Fu, S.-C. Chan, X. Di, B. Biswal, and Z. Zhang. Adaptive covariance estimation of non-stationary processes and its application to infer dynamic connectivity from fMRI. Biomedical Circuits and Systems, IEEE Transactions on, 8(2):228–239, April 2014.

[50] K. M. Gates and P. C. Molenaar. Group search algorithm recovers effective connectivity maps for individuals in homogeneous and heterogeneous samples. NeuroImage, 63(1):310–319, 2012.

[51] T. Ge, J. Feng, D. P. Hibar, P. M. Thompson, and T. E. Nichols. Increasing power for voxel-wise genome-wide association studies: the random field theory, least square kernel machines and fast permutation procedures. NeuroImage, 63:858–873, 2012.

[52] D. C. Glahn, A. R. Laird, et al. Meta-analysis of gray matter anomalies in schizophrenia: application of anatomic likelihood estimation and network analysis. Biological Psychiatry, 64(9):774–781, 2008.

[53] D. C. Glahn, A. M. Winkler, P. Kochunov, L. Almasy, R. Duggirala, M. A. Carless, J. C. Curran, R. L. Olvera, A. R. Laird, S. M. Smith, C. F. Beckmann, P. T. Fox, and J. Blangero. Genetic control over the resting brain. Proceedings of the National Academy of Sciences, 107(3):1223–1228, 2010.

[54] R. Goebel, A. Roebroeck, D. Kim, and E. Formisano. Investigating directed cortical interactions in time-resolved fMRI data using vector autoregressive modeling and Granger causality mapping. Magn Reson Imaging, 21:1251–1261, 2003.

[55] P. I. Good. Permutation Tests: A Practical Guide to Resampling Methods for Testing Hypotheses, 2nd edition. Springer, New York, 2000.

[56] M. D. Greicius, G. Srivastava, A. L. Reiss, and V. Menon.
Default-mode network activity distinguishes Alzheimer's disease from healthy aging: evidence from functional MRI. Proceedings of the National Academy of Sciences of the United States of America, 101(13):4637–4642, 2004. → pages 7

[57] M. D. Greicius, B. H. Flores, V. Menon, G. H. Glover, H. B. Solvason, H. Kenna, A. L. Reiss, and A. F. Schatzberg. Resting-state functional connectivity in major depression: abnormally increased contributions from subgenual cingulate cortex and thalamus. Biological Psychiatry, 62(5):429–437, 2007. → pages 7

[58] R. K. Gupta, N. Soni, S. Kumar, and N. Khandelwal. Imaging of central nervous system viral diseases. Journal of Magnetic Resonance Imaging, 35(3):477–491, 2012. → pages 2

[59] M. Guye, G. Bettus, F. Bartolomei, and P. J. Cozzone. Graph theoretical analysis of structural and functional connectivity MRI in normal and pathological brain networks. Magnetic Resonance Materials in Physics, Biology and Medicine, 23(5-6):409–421, 2010. → pages 14

[60] S. Haller, R. Kopel, P. Jhooti, T. Haas, F. Scharnowski, K.-O. Lovblad, K. Scheffler, and D. V. D. Ville. Dynamic reconfiguration of human brain functional networks through neurofeedback. NeuroImage, 81:243–252, 2013. doi:10.1016/j.neuroimage.2013.05.019. → pages 13, 79

[61] D. A. Handwerker, V. Roopchansingh, J. Gonzalez-Castillo, and P. A. Bandettini. Periodic changes in fMRI connectivity. NeuroImage, 63(3):1712–1719, 2012. doi:10.1016/j.neuroimage.2012.06.078. → pages 13, 56, 69, 80

[62] L. Harrison, W. Penny, and K. Friston. Multivariate autoregressive modeling of fMRI time series. NeuroImage, 19(4):1477–1491, 2003. → pages 6, 9

[63] B. J. He. Scale-free properties of the functional magnetic resonance imaging signal during rest and task. The Journal of Neuroscience, 31(39):13786–13795, 2011.
→ pages 72

[64] R. Heller, D. Stanley, D. Yekutieli, N. Rubin, and Y. Benjamini. Cluster-based analysis of fMRI data. NeuroImage, 33(2):599–608, 2006. → pages 8

[65] D. P. Hibar, J. L. Stein, et al. Voxelwise gene-wide association study (vGeneWAS): Multivariate gene-based association testing in 731 elderly subjects. NeuroImage, 56(4):1875–1891, 2011. → pages 30

[66] S. Huang, J. Li, L. Sun, J. Ye, A. Fleisher, T. Wu, K. Chen, E. Reiman, A. D. N. Initiative, et al. Learning brain connectivity of Alzheimer's disease by sparse inverse covariance estimation. NeuroImage, 50(3):935–949, 2010. → pages 10

[67] R. M. Hutchison, T. Womelsdorf, J. S. Gati, S. Everling, and R. S. Menon. Resting-state networks show dynamic functional connectivity in awake humans and anesthetized macaques. Human Brain Mapping, 2012. → pages 13, 56, 80

[68] R. M. Hutchison, T. Womelsdorf, E. A. Allen, P. A. Bandettini, V. D. Calhoun, M. Corbetta, S. D. Penna, J. H. Duyn, G. H. Glover, J. Gonzalez-Castillo, D. A. Handwerker, S. Keilholz, V. Kiviniemi, D. A. Leopold, F. de Pasquale, O. Sporns, M. Walter, and C. Chang. Dynamic functional connectivity: Promise, issues, and interpretations. NeuroImage, 80:360–378, 2013. doi:10.1016/j.neuroimage.2013.05.079. → pages 55, 56, 57, 73, 79

[69] M. J. Jafri, G. D. Pearlson, M. Stevens, and V. D. Calhoun. A method for functional network connectivity among spatially independent resting-state components in schizophrenia. NeuroImage, 39(4):1666–1681, 2008. → pages 7

[70] A. Jalali, S. Sanghavi, C. Ruan, and P. K. Ravikumar. A dirty model for multi-task learning. In Advances in Neural Information Processing Systems, pages 964–972, 2010. → pages 80, 82, 87

[71] D. C. Javitt. Glutamatergic theories of schizophrenia. Isr J Psychiatry Relat Sci, 47:4–16, 2010. → pages 44

[72] D. T. Jones, P. Vemuri, M. C. Murphy, J. L. Gunter, M. L. Senjem, M. M. Machulda, S. A. Przybelski, B. E.
Gregg, K. Kantarci, D. S. Knopman, et al. Non-stationarity in the resting brain's modular architecture. PLoS ONE, 7(6):e39731, 2012. → pages 13, 56, 73, 80

[73] J. Kahan, M. Urner, R. Moran, G. Flandin, A. Marreiros, L. Mancini, M. White, J. Thornton, T. Yousry, L. Zrinzo, et al. Resting state functional MRI in Parkinson's disease: the impact of deep brain stimulation on effective connectivity. Brain, 137(4):1130–1144, 2014. → pages 104

[74] J. Kang, L. Wang, C. Yan, J. Wang, X. Liang, and Y. He. Characterizing dynamic functional connectivity in the resting brain using variable parameter regression and Kalman filtering approaches. NeuroImage, 56(3):1222–1234, 2011. → pages 13, 56

[75] S. Kim and E. P. Xing. Statistical estimation of correlated genome associations to a quantitative trait network. PLoS Genetics, 5(8):e1000587, 2009. → pages 35, 61

[76] M. Kolar, L. Song, and E. P. Xing. Sparsistent learning of varying-coefficient models with structural changes. In Advances in Neural Information Processing Systems, pages 1006–1014, 2009. → pages 56, 57

[77] F. M. Krienen, B. T. T. Yeo, and R. L. Buckner. Reconfigurable task-dependent functional coupling modes cluster around a core functional architecture. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 369(1653), 2014. doi:10.1098/rstb.2013.0526. → pages 13, 79, 80, 97

[78] A. Laakso, J. Bergman, M. Haaparanta, H. Vilkman, O. Solin, E. Syvalahti, and J. Hietala. Decreased striatal dopamine transporter binding in vivo in chronic schizophrenia. Schizophrenia Research, 52:115–120, 2001. → pages 52

[79] N. Leonardi and D. V. D. Ville. On spurious and real fluctuations of dynamic functional connectivity during rest. NeuroImage, 104:430–436, 2015. doi:10.1016/j.neuroimage.2014.09.007. → pages 13

[80] N. Leonardi, J. Richiardi, M. Gschwind, S. Simioni, J.-M. Annoni, M. Schluep, P.
Vuilleumier, and D. V. D. Ville. Principal components of functional connectivity: A new approach to study dynamic brain connectivity during rest. NeuroImage, 83:937–950, 2013. doi:10.1016/j.neuroimage.2013.07.019. → pages 12, 13, 14, 79, 85, 86, 101

[81] C. M. Lewis, A. Baldassarre, G. Committeri, G. L. Romani, and M. Corbetta. Learning sculpts the spontaneous activity of the resting human brain. Proceedings of the National Academy of Sciences, 106(41):17558–17563, 2009. → pages 102

[82] J. Li and Z. J. Wang. Controlling the false discovery rate of the association/causality structure learned with the PC algorithm. J. Mach. Learn. Res., 10:475–514, June 2009. → pages 6, 10, 20, 21

[83] J. Li, Z. J. Wang, S. J. Palmer, and M. J. McKeown. Dynamic Bayesian network modeling of fMRI: A comparison of group-analysis methods. NeuroImage, 41(2):398–407, 2008. → pages 30, 55

[84] K. Li, L. Guo, J. Nie, G. Li, and T. Liu. Review of methods for functional brain connectivity detection using fMRI. Computerized Medical Imaging and Graphics, 33:131–139, 2009. → pages 31

[85] X. Li, D. Zhu, X. Jiang, C. Jin, X. Zhang, L. Guo, J. Zhang, X. Hu, L. Li, and T. Liu. Dynamic functional connectomics signatures for characterization and differentiation of PTSD patients. Human Brain Mapping, 35(4):1761–1778, 2014. doi:10.1002/hbm.22290. → pages 12, 73, 79

[86] M. A. Lindquist, C. Waugh, and T. D. Wager. Modeling state-related fMRI activity using change-point theory. NeuroImage, 35(3):1125–1141, 2007. → pages 13, 56, 80

[87] J. Listgarten and D. Heckerman. Determining the number of non-spurious arcs in a learned DAG model: Investigation of a Bayesian and a frequentist approach. In Proceedings of the 23rd Conference on Uncertainty in Artificial Intelligence, 2007. → pages 20

[88] A. Liu, J.
Li, Z. J. Wang, and M. J. McKeown. A computationally efficient, exploratory approach to brain connectivity incorporating false discovery rate control, a priori knowledge, and group inference. Computational and Mathematical Methods in Medicine, 2012, 2012. → pages 21, 30, 31

[89] A. Liu, X. Chen, Z. Wang, Q. Xu, S. Appel-Cresswell, and M. McKeown. A genetically informed, group fMRI connectivity modeling approach: Application to schizophrenia. Biomedical Engineering, IEEE Transactions on, 61(3):946–956, March 2014. doi:10.1109/TBME.2013.2294151. → pages 58

[90] A. Liu, X. Chen, M. McKeown, and Z. Wang. A sticky weighted regression model for time-varying resting-state brain connectivity estimation. Biomedical Engineering, IEEE Transactions on, 62(2):501–510, Feb 2015. doi:10.1109/TBME.2014.2359211. → pages 12, 13, 79, 101

[91] J. Liu, G. Pearlson, A. Windemuth, G. Ruano, N. I. Perrone-Bizzozero, and V. Calhoun. Combining fMRI and SNP data to investigate connections between brain function and genetics using parallel ICA. Human Brain Mapping, 30(1):241–255, 2009. doi:10.1002/hbm.20508. → pages 52

[92] X. Liu and J. H. Duyn. Time-varying functional network information extracted from brief instances of spontaneous brain activity. Proceedings of the National Academy of Sciences, 110(11):4392–4397, 2013. → pages 13, 55, 56, 80

[93] P.-L. Loh and P. Bühlmann. High-dimensional learning of linear causal networks via inverse covariance estimation. The Journal of Machine Learning Research, 15(1):3065–3105, 2014. → pages 102

[94] Y. Lu, T. Jiang, and Y. Zang. A split-merge-based region-growing method for fMRI activation detection. Hum. Brain Mapp., 22(4):271–279, Aug. 2004. → pages 103

[95] M.-E. Lynall, D. S. Bassett, R. Kerwin, P. J. McKenna, M. Kitzbichler, U. Muller, and E. Bullmore. Functional connectivity and brain networks in schizophrenia. The Journal of Neuroscience, 30:9477–9487, 2010. → pages 30

[96] G.
Marrelec, A. Krainik, H. Duffau, M. Pelegrini-Issac, S. Lehericy, J. Doyon, and H. Benali. Partial correlation for functional brain interactivity investigation in functional MRI. NeuroImage, 32(1):228–237, 2006. → pages 7

[97] A. McIntosh, C. Grady, L. Ungerleider, J. Haxby, S. Rapoport, and B. Horwitz. Network analysis of cortical visual pathways mapped with PET. The Journal of Neuroscience, 14(2):655–666, 1994. → pages 6, 9

[98] M. J. McKeown and G. M. Peavy. Biomarkers in Parkinson disease: It's time to combine. Neurology, pages 10–1212, 2015. → pages 105

[99] M. J. McKeown, T.-P. Jung, S. Makeig, G. Brown, S. S. Kindermann, T.-W. Lee, and T. J. Sejnowski. Spatially independent activity patterns in functional MRI data during the Stroop color-naming task. Proceedings of the National Academy of Sciences, 95(3):803–810, 1998. → pages 6

[100] M. J. McKeown, S. Makeig, G. G. Brown, T.-P. Jung, S. S. Kindermann, A. J. Bell, and T. J. Sejnowski. Analysis of fMRI data by blind separation into independent spatial components. Hum. Brain Mapp., 6:160–188, 1998. → pages 6, 7

[101] N. Meinshausen and P. Bühlmann. Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(4):417–473, 2010. doi:10.1111/j.1467-9868.2010.00740.x. → pages 61

[102] S. A. Mitelman, L. Shihabuddin, A. M. Brickman, E. A. Hazlett, and M. S. Buchsbaum. Volume of the cingulate and outcome in schizophrenia. Schizophrenia Research, 72(2):91–108, 2005. → pages 52

[103] M. Moosmann, T. Eichele, H. Nordby, K. Hugdahl, and V. D. Calhoun. Joint independent component analysis for simultaneous EEG-fMRI: Principle and simulation. International Journal of Psychophysiology, 67:212–221, 2008. doi:10.1016/j.ijpsycho.2007.05.016. → pages 52

[104] J. A. Mumford and J. D. Ramsey.
Bayesian networks for fMRI: a primer. NeuroImage, 86:573–582, 2014. → pages 10

[105] B. Ng. Prior-informed multivariate models for functional magnetic resonance imaging. 2011. → pages 3

[106] B. Ng, R. Abugharbieh, X. Huang, and M. McKeown. Spatial characterization of fMRI activation maps using invariant 3-D moment descriptors. Medical Imaging, IEEE Transactions on, 28(2):261–268, Feb. 2009. → pages 4

[107] T. E. Nichols and A. P. Holmes. Nonparametric permutation tests for functional neuroimaging: a primer with examples. Human Brain Mapping, 15(1):1–25, 2002. → pages 40

[108] S. Ogawa, T.-M. Lee, A. R. Kay, and D. W. Tank. Brain magnetic resonance imaging with contrast dependent on blood oxygenation. Proceedings of the National Academy of Sciences, 87(24):9868–9872, 1990. → pages 2

[109] S. Palmer, L. Eigenraam, T. Hoque, R. McCaig, A. Troiano, and M. McKeown. Levodopa-sensitive, dynamic changes in effective connectivity during simultaneous movements in Parkinson's disease. Neuroscience, 158(2):693–704, 2009. → pages 72

[110] S. Palmer, J. Li, Z. Wang, and M. McKeown. Joint amplitude and connectivity compensatory mechanisms in Parkinson's disease. Neuroscience, 166(4):1110–1118, 2010. → pages

[111] S. J. Palmer, B. Ng, R. Abugharbieh, L. Eigenraam, and M. J. McKeown. Motor reserve and novel area recruitment: amplitude and spatial characteristics of compensation in Parkinson's disease. European Journal of Neuroscience, 29(11):2187–2196, 2009. → pages 26, 72

[112] R. S. Patel, F. D. Bowman, and J. K. Rilling. A Bayesian approach to determining connectivity of the human brain. Hum. Brain Mapp., 27(3):267–276, Mar. 2006. → pages 9

[113] S. J. Peltier, T. A. Polk, and D. C. Noll. Detecting low-frequency functional connectivity in fMRI using a self-organizing map (SOM) algorithm. Human Brain Mapping, 20(4):220–226, 2003. → pages 7

[114] D. Precup and P. Bachman. Improved estimation in time varying models. arXiv preprint arXiv:1206.6385, 2012. → pages 57

[115] D. Prichard and J. Theiler.
Generating surrogate data for time series with several simultaneously measured variables. Physical Review Letters, 73(7):951, 1994. → pages 52

[116] Z. Qi, X. Wu, Z. Wang, N. Zhang, H. Dong, L. Yao, and K. Li. Impairment and compensation coexist in amnestic MCI default mode network. NeuroImage, 50(1):48–55, 2010. → pages 7

[117] J. C. Rajapakse and J. Zhou. Learning effective brain connectivity with dynamic Bayesian networks. NeuroImage, 37(3):749–760, 2007. → pages 6

[118] J. D. Ramsey, S. J. Hanson, C. Hanson, Y. O. Halchenko, R. A. Poldrack, and C. Glymour. Six problems for causal inference from fMRI. NeuroImage, 49(2):1545–1558, 2010. → pages 10

[119] J. D. Ramsey, S. J. Hanson, and C. Glymour. Multi-subject search correctly identifies causal connections and most causal directions in the DCM models of the Smith et al. simulation study. NeuroImage, 58(3):838–848, 2011. → pages 11, 30

[120] J. W. Robinson and A. J. Hartemink. Learning non-stationary dynamic Bayesian networks. J. Mach. Learn. Res., 9999:3647–3680, December 2010. → pages 56

[121] S. Ryali, T. Chen, K. Supekar, and V. Menon. Estimation of functional connectivity in fMRI data using stability selection-based sparse partial correlation with elastic net penalty. NeuroImage, 59(4):3852–3861, 2012. → pages 58

[122] S. Saha, D. Chant, J. Welham, and J. McGrath. A systematic review of the prevalence of schizophrenia. PLoS Medicine, 2(5):124–138, 2005. doi:10.1371/journal.pmed.0020141. → pages 29

[123] U. Sakoglu, G. D. Pearlson, K. A. Kiehl, Y. M. Wang, A. M. Michael, and V. D. Calhoun. A method for evaluating dynamic functional network connectivity and task-modulation: application to schizophrenia. Magnetic Resonance Materials in Physics, Biology and Medicine, 23(5-6):351–366, 2010. → pages 13, 56, 80

[124] J. R. Sato, E. A. Junior, D. Y. Takahashi, M. de Maria Felix, M. J. Brammer, and P. A. Morettin.
A method to produce evolving functional connectivity maps during the course of an fMRI experiment using wavelet-based time-varying Granger causality. NeuroImage, 31(1):187–196, 2006. → pages 12, 13, 56, 79, 102

[125] J. R. Sato, D. Y. Takahashi, S. M. Arcuri, K. Sameshima, P. A. Morettin, and L. A. Baccala. Frequency domain connectivity identification: An application of partial directed coherence in fMRI. Hum. Brain Mapp., 30(2):452–461, Feb. 2009. → pages 6

[126] J. Seibyl, D. Jennings, R. Tabamo, and K. Marek. Neuroimaging trials of Parkinson's disease progression. Journal of Neurology, 251(0), 2004. → pages 104

[127] S. Seo, J. Mohr, H. Heekeren, A. Heinz, B. Eppinger, S.-C. Li, and K. Obermayer. A voxel selection method for the multivariate analysis of imaging genetics data. In Neural Networks (IJCNN), The 2012 International Joint Conference on, pages 1–7. IEEE, 2012. → pages 104

[128] C. E. Shannon. A mathematical theory of communication. SIGMOBILE Mob. Comput. Commun. Rev., 5(1):3–55, Jan. 2001. → pages 6

[129] F. Shi, P.-T. Yap, W. Gao, W. Lin, J. H. Gilmore, and D. Shen. Altered structural connectivity in neonates at genetic risk for schizophrenia: A combined study using morphological and white matter networks. NeuroImage, 62:1622–1633, 2012. → pages 30

[130] S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. Kerminen. A linear non-Gaussian acyclic model for causal discovery. The Journal of Machine Learning Research, 7:2003–2030, 2006. → pages 10

[131] S. M. Smith. The future of fMRI connectivity. NeuroImage, 62(2):1257–1266, 2012. → pages 10

[132] S. M. Smith, K. L. Miller, G. Salimi-Khorshidi, M. Webster, C. F. Beckmann, T. E. Nichols, J. D. Ramsey, and M. W. Woolrich. Network modelling methods for fMRI. NeuroImage, 54(2):875–891, 2011. → pages 30, 31

[133] L. Song, M. Kolar, and E. P. Xing. Time-varying dynamic Bayesian networks. In Advances in Neural Information Processing Systems, pages 1732–1740, 2009. → pages 56, 57, 58, 64

[134] P. Spirtes, C. Glymour, and R. Scheines.
Causation, Prediction, and Search. The MIT Press, 2001. → pages 20

[135] C. J. Stam. Modern network science of neurological disorders. Nature Reviews Neuroscience, 15(10):683–695, 2014. → pages 6

[136] K. E. Stephan, W. D. Penny, J. Daunizeau, R. J. Moran, and K. J. Friston. Bayesian model selection for group studies. NeuroImage, 46(4):1004–1017, 2009. → pages 11, 30

[137] J. D. Storey. A direct approach to false discovery rates. Journal of the Royal Statistical Society. Series B (Statistical Methodology), 64(3):479–498, 2002. → pages 19

[138] J. Talairach and P. Tournoux. Co-planar stereotaxic atlas of the human brain. 3-dimensional proportional system: an approach to cerebral imaging, 1988. → pages 103

[139] H.-Y. Tan, K. K. Nicodemus, et al. Genetic variation in AKT1 is linked to dopamine-associated prefrontal cortical structure and function in humans. The Journal of Clinical Investigation, 118(6):2200–2208, 2008. → pages 29

[140] G. J. Thompson, M. E. Magnuson, M. D. Merritt, H. Schwarb, W.-J. Pan, A. McKinley, L. D. Tripp, E. H. Schumacher, and S. D. Keilholz. Short-time windows of correlation between large-scale functional brain networks predict vigilance intraindividually and interindividually. Human Brain Mapping, 2012. → pages 13, 56, 80

[141] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 58(1):267–288, Jan. 1996. → pages 33, 87, 89

[142] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1):91–108, Feb. 2005. → pages 37, 57

[143] I. Tsamardinos and L. E. Brown. Bounding the false discovery rate in local Bayesian network learning. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, 2008. → pages 20

[144] P. A. Valdes-Sosa, J. M. Sanchez-Bornot, A. Lage-Castellanos, M. Vega-Hernandez, J. Bosch-Bayard, L.
Melie-Garcia, and E. Canales-Rodriguez. Estimating brain functional connectivity with sparse multivariate autoregression. Philosophical Transactions of the Royal Society B: Biological Sciences, 360(1457):969–981, 2005. → pages 33

[145] D. Van de Ville, J. Britz, and C. M. Michel. EEG microstate sequences in healthy humans at rest reveal scale-free dynamics. Proceedings of the National Academy of Sciences, 107(42):18179–18184, 2010. → pages 97

[146] M. P. Van Den Heuvel and H. E. H. Pol. Exploring the brain network: a review on resting-state fMRI functional connectivity. European Neuropsychopharmacology, 20(8):519–534, 2010. → pages 5

[147] A. Vanhaudenhuyse, Q. Noirhomme, L. J.-F. Tshibanda, M.-A. Bruno, P. Boveroux, C. Schnakers, A. Soddu, V. Perlbarg, D. Ledoux, J.-F. Brichant, et al. Default network connectivity reflects the level of consciousness in non-communicative brain-damaged patients. Brain, 133(1):161–171, 2010. → pages 7

[148] G. Varoquaux and R. C. Craddock. Learning and comparing functional connectomes across subjects. NeuroImage, 80:405–415, 2013. → pages 10

[149] G. Varoquaux, A. Gramfort, J. B. Poline, and B. Thirion. Brain covariance selection: better individual functional connectivity models using population prior. In Advances in Neural Information Processing Systems, pages 2334–2342, 2010. → pages 6, 11, 30

[150] G. Varoquaux, A. Gramfort, F. Pedregosa, V. Michel, and B. Thirion. Multi-subject dictionary learning to segment an atlas of brain spontaneous activity. In Information Processing in Medical Imaging, pages 562–573. Springer, 2011. → pages 10

[151] M. Vounou, E. Janousova, R. Wolz, J. L. Stein, P. M. Thompson, D. Rueckert, and G. Montana. Sparse reduced-rank regression detects genetic associations with voxel-wise longitudinal phenotypes in Alzheimer's disease. NeuroImage, 60:700–716, 2012. → pages 30

[152] J. Wang, L. Wang, Y. Zang, H. Yang, H. Tang, Q. Gong, Z. Chen, C. Zhu, and Y. He.
Parcellation-dependent small-world brain functional networks: A resting-state fMRI study. Human Brain Mapping, 30(5):1511–1523, 2009. → pages 103

[153] Z. Wang, E. Kuruoglu, X. Yang, Y. Xu, and T. Huang. Time varying dynamic Bayesian network for nonstationary events modeling and online inference. Signal Processing, IEEE Transactions on, 59(4):1553–1568, 2011. → pages 56

[154] C.-Y. Wee, P.-T. Yap, D. Zhang, L. Wang, and D. Shen. Group-constrained sparse fMRI connectivity modeling for mild cognitive impairment identification. Brain Structure and Function, 219(2):641–656, 2014. doi:10.1007/s00429-013-0524-8. → pages 80

[155] T. Wu, L. Wang, Y. Chen, C. Zhao, K. Li, and P. Chan. Changes of functional connectivity of the motor network in the resting state in Parkinson's disease. Neuroscience Letters, 460(1):6–10, 2009. → pages 104

[156] T. Wu, X. Long, L. Wang, M. Hallett, Y. Zang, K. Li, and P. Chan. Functional connectivity of cortical motor areas in the resting state in Parkinson's disease. Human Brain Mapping, 32(9):1443–1457, 2011. → pages 104

[157] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006. → pages 87, 89

[158] Y. Zhang, A. Liu, S. N. Tan, M. J. McKeown, and Z. J. Wang. Connectivity-based parcellation of putamen using resting state fMRI data. In Biomedical Imaging (ISBI), 2015 IEEE 12th International Symposium on, pages 34–37. IEEE, 2015. → pages 104

[159] J. Zhou, J. Chen, and J. Ye. MALSAR: Multi-task learning via structural regularization. Arizona State University, 2011. → pages 83

[160] S. Zhou, J. Lafferty, and L. Wasserman. Time varying undirected graphs. Machine Learning, 80(2-3):295–319, 2010. doi:10.1007/s10994-010-5180-0. → pages 56

[161] H. Zou and T. Hastie.
Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(2):301–320, 2005. → pages 33