Beyond catastrophic forgetting : advancing continual learning for robust and fair medical image analysis
Bayasi, Nourhan
Abstract
Traditional Deep Learning (DL) models struggle to adapt to evolving environments, making them ineffective for clinical scenarios where data continuously changes. Continual Learning (CL) offers a promising solution by enabling models to learn sequentially while addressing catastrophic forgetting. However, challenges remain in resource efficiency, generalization, adaptation, fairness, and the inefficient integration of pretrained models, especially in medical image analysis. To tackle these challenges, this dissertation proposes several novel methods. For resource efficiency, we introduce Culprit-Prune-Net (CPN), a fixed-size network that learns tasks sequentially by dynamically allocating subnetworks using a culpability-based pruning strategy. To improve generalization, we develop the Generalizable Continual Classification Network (GC²), which enhances CPN with a mechanism for learning representations adaptable to unseen domains, and BoosterNet, which leverages a core network’s errors to improve single-domain generalization without altering its architecture or training process. For adaptation, we propose Continual-GEN, an online learning method that removes the need for explicit task-boundary information by using an ensemble of clusters to measure image-batch similarity, dynamically assigning or updating subnetworks. To address fairness, we introduce BiasPruner, which mitigates bias transfer by selectively forgetting biased decision-making units, enhancing minority-group performance over time. Finally, we present Continual-Zoo, a framework that integrates pretrained networks into CL, reducing computational overhead and improving generalization for medical downstream tasks. Evaluations on multiple medical imaging datasets show our methods significantly outperform existing baselines and state-of-the-art (SOTA) CL techniques.
Item Metadata
Title | Beyond catastrophic forgetting : advancing continual learning for robust and fair medical image analysis
Creator | Bayasi, Nourhan
Publisher | University of British Columbia
Date Issued | 2025
Language | eng
Date Available | 2025-03-06
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0448180
Degree Grantor | University of British Columbia
Graduation Date | 2025-05
Scholarly Level | Graduate
Aggregated Source Repository | DSpace