BIRS Workshop Lecture Videos


Learning Large-Scale Brain Networks for Twin fMRI
Speaker: Moo Chung


In many human brain network studies, we do not have a sufficient number (n) of images relative to the number (p) of voxels, due to the prohibitively expensive cost of scanning enough subjects. Thus, brain network models usually suffer from the small-n, large-p problem. Such a problem is often remedied by sparse network models, which are usually estimated numerically by optimizing L1 penalties. Unfortunately, due to the computational bottleneck associated with optimizing L1 penalties, it is not practical to apply such methods to learn large-scale brain networks. In this paper, we introduce a new sparse network model, based on cross-correlations, that bypasses this computational bottleneck. Our model can build sparse brain networks at the voxel level with p > 25000. Instead of using a single sparsity parameter, which may not be optimal across studies and datasets, the computational speed gain enables us to analyze the collection of networks at every possible sparsity parameter in a coherent mathematical framework via persistent homology. The method is subsequently applied to determine, for the first time, the extent of heritability of functional brain networks at the voxel level using twin fMRI. This is joint work with Paul Rathouz of the University of Wisconsin-Madison, David Zald of Vanderbilt University, and Benjamin Lahey of the University of Chicago.
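The abstract does not spell out the algorithm, but the core idea it describes — thresholding a correlation matrix at every sparsity level and tracking the resulting graphs with persistent homology — can be sketched minimally. The code below is an illustrative assumption, not the authors' implementation: it uses a tiny toy problem (p = 12 nodes rather than p > 25000), takes the Betti-0 number (connected component count) as the topological summary, and uses a simple union-find to compute it across the whole filtration of thresholds.

```python
import numpy as np

def betti0_curve(X, thresholds):
    """Betti-0 (connected-component) counts of correlation graphs
    obtained by thresholding |corr| at each value in `thresholds`.

    X          : (n, p) array of n observations over p nodes (voxels).
    thresholds : increasing sparsity parameters; higher threshold
                 keeps fewer edges, i.e. a sparser network.
    """
    p = X.shape[1]
    C = np.corrcoef(X, rowvar=False)  # p x p correlation matrix

    def n_components(adj):
        # Union-find over the thresholded adjacency matrix.
        parent = list(range(p))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path halving
                i = parent[i]
            return i

        for i in range(p):
            for j in range(i + 1, p):
                if adj[i, j]:
                    ri, rj = find(i), find(j)
                    if ri != rj:
                        parent[ri] = rj
        return len({find(i) for i in range(p)})

    # As the threshold rises, edges are only removed, so the
    # component count is monotone non-decreasing (a graph filtration).
    return [n_components(np.abs(C) >= t) for t in thresholds]

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 12))          # 50 scans, 12 toy "voxels"
curve = betti0_curve(X, np.linspace(0.0, 1.0, 11))
print(curve)  # starts at 1 (complete graph), ends at 12 (no edges)
```

Because the per-threshold computation is just a connectivity query on a thresholded matrix, sweeping all sparsity levels is cheap compared with re-solving an L1-penalized optimization at each level, which is the speed gain the abstract refers to.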



Attribution-NonCommercial-NoDerivatives 4.0 International