Open Collections
UBC Theses and Dissertations
Incremental learning and federated learning for heterogeneous medical image analysis
Ayromlou, Sana
Abstract
The standard deep learning paradigm may not be practical for real-world heterogeneous medical data, where new diseases emerge over time and data are acquired in a distributed manner across various hospitals. Approaches have been developed to facilitate the training of deep models under two primary categories of heterogeneity: 1) class incremental learning, which offers a promising solution for sequential heterogeneity by adapting a deep network trained on previous disease classes to handle newly introduced diseases over time; and 2) federated learning, which offers a promising solution for distributed heterogeneity by training a global model on a centralized server over the private datasets of various hospitals or clients without requiring them to share data. The core challenge in both approaches is catastrophic forgetting, which refers to performance degradation on previously trained data when adapting a model to the currently available data. Because strict patient privacy regulations often discourage storing and sharing medical data, addressing such forgetting poses a significant hurdle. We propose to leverage medical data synthesis to recover inaccessible medical data in heterogeneous learning, presenting two distinct novel frameworks. Our first framework introduces a novel two-step, data-free class incremental learning pipeline. It first synthesizes data by inverting the model weights trained on previous classes and matching the statistics saved in continual normalization layers to obtain continual class-specific samples. The model is then updated with three novel loss functions that enhance the utility of the synthesized data and mitigate forgetting. Extensive experiments demonstrate that the proposed framework achieves results comparable to state-of-the-art methods on four public MedMNIST datasets and an in-house heart echocardiography dataset. Our second framework is a novel federated learning approach that mitigates forgetting by generating and utilizing global synthetic data shared among clients. First, we propose constrained model inversion over the server model to enforce an information-preserving property in the synthetic data and to leverage the global distribution captured in the globally aggregated server model. We then utilize this synthetic data alongside the local data to enhance the generalization capabilities of local training. Extensive experiments show that the proposed method achieves state-of-the-art performance on the BloodMNIST and Retina datasets.
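The data-free synthesis step described above follows the general recipe of model inversion guided by statistics stored in normalization layers. The sketch below is a rough, hypothetical illustration of that idea in PyTorch, assuming a frozen classifier with standard BatchNorm2d layers; the thesis itself uses continual normalization layers and additional loss terms not shown here, and all names, hyperparameters, and the 32x32 RGB input shape are illustrative assumptions rather than the actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of data-free synthesis by model inversion. `frozen_model`,
# `num_prev_classes`, and the loss weight are illustrative placeholders.

def bn_statistic_loss(activations):
    """Penalize mismatch between the synthetic batch's feature statistics and the
    running statistics stored in the model's normalization layers."""
    loss = 0.0
    for module, feat in activations:
        mean = feat.mean(dim=(0, 2, 3))
        var = feat.var(dim=(0, 2, 3), unbiased=False)
        loss = loss + F.mse_loss(mean, module.running_mean) \
                    + F.mse_loss(var, module.running_var)
    return loss

def synthesize(frozen_model, num_prev_classes, batch_size=32, steps=500, lr=0.05):
    frozen_model.eval()
    # Optimize the pixels of a synthetic batch directly, starting from noise.
    x = torch.randn(batch_size, 3, 32, 32, requires_grad=True)
    targets = torch.randint(0, num_prev_classes, (batch_size,))
    optimizer = torch.optim.Adam([x], lr=lr)

    # Capture the input of every BatchNorm layer via forward hooks.
    activations, hooks = [], []
    for m in frozen_model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(
                lambda mod, inp, out: activations.append((mod, inp[0]))))

    for _ in range(steps):
        activations.clear()
        optimizer.zero_grad()
        logits = frozen_model(x)
        loss = F.cross_entropy(logits, targets)        # class-conditional term
        loss = loss + 1.0 * bn_statistic_loss(activations)  # statistic matching
        loss.backward()
        optimizer.step()

    for h in hooks:
        h.remove()
    return x.detach(), targets
```

In a class incremental setting, the synthesized batch and its labels would then be mixed with the new-class training data when updating the model, standing in for the previously seen classes whose raw data can no longer be stored.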
Item Metadata
Title | Incremental learning and federated learning for heterogeneous medical image analysis
Creator | Ayromlou, Sana
Supervisor |
Publisher | University of British Columbia
Date Issued | 2023
Description | (See Abstract above.)
Genre |
Type |
Language | eng
Date Available | 2023-10-18
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0437226
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Graduation Date | 2023-11
Campus |
Scholarly Level | Graduate
Rights URI |
Aggregated Source Repository | DSpace