Open Collections
UBC Theses and Dissertations
Decoding neural representations in higher-order visual cortex: an explainable AI-based fMRI analysis
Khademi, Mahmoud
Abstract
Early visual cortex has a retinotopic organization, with neurons tuned to specific properties such as motion and color. However, the governing principles of the higher-order visual cortex remain a topic of ongoing debate. Some studies suggest distinct modular cortical regions for specific image categories, while others propose distributed, overlapping representations. This thesis employs deep learning-based functional magnetic resonance imaging (fMRI) analysis combined with explainable artificial intelligence (AI) methods to explore the topological organization of the ventral temporal visual cortex. Using the BOLD5000 dataset, with fMRI responses from four participants exposed to thousands of images, we defined eight super-categories, including body, face, animal, and outdoor. A neural network model was trained for each super-category to discern its presence or absence from whole-brain fMRI responses. Each model achieved significantly above-chance area under the curve (AUC) performance relative to a randomized model, even when tested on different participants, suggesting a common topological organization underlying visual object representations. Using an explainable AI technique, we identified the cortical regions responsible for each model's decisions. The fusiform face area (FFA) and occipital face area (OFA) were highly sensitive to faces, while the extrastriate body area (EBA) was more sensitive to bodies, and the parahippocampal place area (PPA) to outdoor scenes. Other super-categories showed less distinct cortical specialization across participants. To explore further, we assessed the impact of masking specific brain regions. For faces, preserving FFA and OFA maintained the model's AUC, while masking them significantly decreased it without affecting other super-categories, supporting modular organization. Similarly, masking EBA and PPA lowered performance for the body and outdoor super-categories, respectively.
However, even with masked regions, the models still outperformed random models, indicating partial involvement of distributed representations. These findings suggest that modular and distributed representations coexist, with a stronger modular organization for the face, body, and outdoor super-categories. This study leverages advanced explainable AI techniques to bridge the gap between complex deep learning models as neural representation decoders and their interpretability. The methodological framework holds promise for future research in visual neuroscience, paving the way for a richer understanding of the neural basis of visual processing and cognition.
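The decoding-and-masking logic described in the abstract (a per-super-category classifier on whole-brain responses, evaluated by AUC, then re-evaluated with key regions masked out) can be sketched on synthetic data. This is a minimal illustration only: the data, the logistic-regression model, and the "informative feature" stand-in for an ROI are all assumptions for the sketch, not the thesis's actual neural network or the BOLD5000 data.

```python
import math
import random

random.seed(0)

N_INFORMATIVE = 5  # stand-in for voxels inside a category-selective ROI


def simulate_data(n=200, d=20):
    """Synthetic 'fMRI responses': only the first few features carry signal."""
    X, y = [], []
    for _ in range(n):
        label = random.randint(0, 1)
        row = [random.gauss(0.0, 1.0) for _ in range(d)]
        for j in range(N_INFORMATIVE):
            row[j] += 1.5 if label == 1 else -1.5
        X.append(row)
        y.append(label)
    return X, y


def train_logreg(X, y, lr=0.1, epochs=300):
    """Plain SGD logistic regression, standing in for the per-category model."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, z))))
            g = p - yi  # gradient of log-loss w.r.t. z
            for j in range(d):
                w[j] -= lr * g * xi[j]
            b -= lr * g
    return w, b


def decision_scores(X, w, b):
    return [sum(wj * xj for wj, xj in zip(w, xi)) + b for xi in X]


def auc(y, s):
    """Rank-based AUC (Mann-Whitney U); ties count half."""
    pos = [si for yi, si in zip(y, s) if yi == 1]
    neg = [si for yi, si in zip(y, s) if yi == 0]
    wins = sum(1.0 if p > q else (0.5 if p == q else 0.0)
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))


X, y = simulate_data()
w, b = train_logreg(X, y)
auc_full = auc(y, decision_scores(X, w, b))

# Masking ablation: zero out the "ROI" features, mimicking region masking.
X_masked = [[0.0 if j < N_INFORMATIVE else v for j, v in enumerate(row)]
            for row in X]
auc_masked = auc(y, decision_scores(X_masked, w, b))
```

On this toy data the unmasked AUC is high and drops toward chance once the informative features are masked, mirroring the thesis's finding that masking FFA/OFA degrades face decoding while residual distributed signal may keep performance above a random model.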
Item Metadata
Title | Decoding neural representations in higher-order visual cortex: an explainable AI-based fMRI analysis
Creator | Khademi, Mahmoud
Supervisor |
Publisher | University of British Columbia
Date Issued | 2024
Description | (identical to the Abstract above)
Genre |
Type |
Language | eng
Date Available | 2024-09-03
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0445288
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Graduation Date | 2024-11
Campus |
Scholarly Level | Graduate
Rights URI |
Aggregated Source Repository | DSpace
Item Citations and Data
Permanent URL (DOI): 10.14288/1.0445288
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International