BIRS Workshop Lecture Videos
Opening the analysis black box: Improving robustness and interpretation
Brown, Matthew
Description
Neuroimaging data are very high-dimensional and complicated; a human scientist cannot apprehend the raw data directly. One primary purpose of neuroimaging data analysis is to abstract away most of the dimensionality and complexity in the data by extracting just a small number of significant patterns from it. This analysis involves a long chain of steps that interact with the data at various points. Ideally, each step would "just work", yielding reliable outputs robust to noise and complexity in the data. In practice, the analysis can fail at various steps for a host of reasons, such as the influence of noise or bad convergence in an optimization algorithm. However, the final output of the analysis often provides no indication that such failures have occurred. By design, the analysis abstracts away the complexity of both the data and how the data interacts with the analysis itself; it thus ends up hiding such failures, so it is necessary to look for them deliberately. Another important consideration is that the analysis often abstracts away too much of the structure in the neuroimaging data, leaving meaningful patterns undetected. I will discuss several approaches for examining what the data analysis is doing, allowing improved robustness through quality-assurance checking as well as improved interpretation through consideration of important patterns in the data that often go unnoticed.
Item Metadata
Title | Opening the analysis black box: Improving robustness and interpretation
Creator | Brown, Matthew
Publisher | Banff International Research Station for Mathematical Innovation and Discovery
Date Issued | 2016-02-05T10:06
Extent | 34 minutes
Subject |
Type |
File Format | video/mp4
Language | eng
Notes | Author affiliation: University of Alberta
Series |
Date Available | 2016-08-06
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0307399
URI |
Affiliation |
Peer Review Status | Unreviewed
Scholarly Level | Other
Rights URI |
Aggregated Source Repository | DSpace