UBC Theses and Dissertations
Designing deep neural networks for high dimensional problems : leveraging the connection between deep neural networks and differential equations
Lensink, Keegan
Abstract
This thesis explores the design of deep neural network architectures that inherit and leverage the properties of differential equations and numerical methods, in order to address challenges that arise when applying deep learning to problems with high-dimensional inputs and outputs. Numerical experiments show not only that these properties are inherited by the resulting networks, but that they are beneficial for various applications in scientific computing.

The first network introduced, IMEXNet, is a novel architecture motivated by implicit-explicit (IMEX) methods for solving differential equations. IMEXNet adds an implicit step that connects all pixels in each channel of the image, addressing the field-of-view problem while remaining comparable to standard convolutions in the number of parameters and computational complexity. Compared to similar explicit networks, such as residual networks, IMEXNet has improved forward stability, which has recently been shown to reduce sensitivity to small changes in the input features and to improve generalization.

The second network, HyperNet, is a fully reversible architecture inspired by reversible hyperbolic differential equations and the leapfrog discretization method. To address the memory constraints of conventional, and even partially reversible, networks, HyperNet introduces a reversible coarsening operation based on a learnable form of the discrete wavelet transform. This enables the design of networks whose memory requirements are constant, irrespective of the number of layers or blocks. The networks are also extended to variational autoencoders, where optimization begins from an exact recovery and the level of compression is discovered by minimizing a regularized objective. Finally, HyperNet's properties are applied to the real-world task of segmenting pulmonary opacification in high-resolution 3D chest CT scans of COVID-19 patients, together with a soft-labelling methodology that addresses the inter-observer variability inherent in the task, further demonstrating feasibility for large-scale applications on modest hardware.
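To make the IMEX idea concrete, the following is a minimal sketch of one such layer, assuming periodic boundary conditions so the implicit diffusion solve can be done per channel with an FFT. The function name `imex_step` and the parameters `h` and `alpha` are illustrative choices, not the thesis's actual implementation.

```python
import torch
import torch.nn.functional as F

def imex_step(y, weight, h=0.1, alpha=1.0):
    """One IMEX-style layer: an explicit residual convolution followed
    by an implicit diffusion solve that couples every pixel in each
    channel (addressing the field-of-view problem).

    y:      (N, C, H, W) feature maps
    weight: (C, C, 3, 3) convolution kernel
    """
    # Explicit half-step: a standard residual-network update.
    rhs = y + h * torch.relu(F.conv2d(y, weight, padding=1))

    # Implicit half-step: solve (I + h*alpha*L) z = rhs, where L is the
    # 5-point discrete Laplacian. Under periodic boundaries L is
    # diagonalized by the 2D FFT, so the solve is a pointwise division.
    N, C, H, W = rhs.shape
    ky = torch.fft.fftfreq(H, device=y.device).view(H, 1)
    kx = torch.fft.fftfreq(W, device=y.device).view(1, W)
    lap = 4 - 2 * torch.cos(2 * torch.pi * ky) - 2 * torch.cos(2 * torch.pi * kx)

    z_hat = torch.fft.fft2(rhs) / (1 + h * alpha * lap)
    return torch.fft.ifft2(z_hat).real
```

Note that the FFT-based solve costs O(HW log HW) per channel and adds no learned parameters beyond the explicit convolution, which is consistent with the abstract's claim about parameter count and computational cost.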
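The leapfrog mechanism behind HyperNet can likewise be sketched. The block below is a generic conservative leapfrog recurrence, not the thesis's code: given any layer function f, the update y_{j+1} = 2 y_j - y_{j-1} + h² f(y_j) is algebraically invertible, so a backward pass can regenerate intermediate states instead of storing them.

```python
import torch
import torch.nn.functional as F

def f(y, w1, w2):
    # An illustrative layer function: two 3x3 convolutions with a ReLU.
    return F.conv2d(torch.relu(F.conv2d(y, w1, padding=1)), w2, padding=1)

def leapfrog_forward(y_prev, y_curr, weights, h=0.5):
    """y_{j+1} = 2*y_j - y_{j-1} + h^2 * f(y_j). Only the two most
    recent states are kept, regardless of network depth."""
    for w1, w2 in weights:
        y_prev, y_curr = y_curr, 2 * y_curr - y_prev + h * h * f(y_curr, w1, w2)
    return y_prev, y_curr

def leapfrog_reverse(y_prev, y_curr, weights, h=0.5):
    """Recover the initial states from the final two by running the
    same recurrence backwards: y_{j-1} = 2*y_j - y_{j+1} + h^2 * f(y_j).
    Exact in exact arithmetic; this is what enables training with
    memory requirements that do not grow with the number of layers."""
    for w1, w2 in reversed(weights):
        y_prev, y_curr = 2 * y_prev - y_curr + h * h * f(y_prev, w1, w2), y_prev
    return y_prev, y_curr
```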
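The reversible coarsening can be illustrated with the fixed Haar case of the discrete wavelet transform; the thesis uses a learnable generalization, so treat this purely as a sketch of why wavelet pooling loses no information: it is an orthonormal change of basis that trades spatial resolution for channels.

```python
import torch

def haar_coarsen(y):
    """One orthonormal 2D Haar step: halves H and W, quadruples the
    channels. Invertible, so coarsening discards no information."""
    a = y[..., 0::2, 0::2]; b = y[..., 0::2, 1::2]
    c = y[..., 1::2, 0::2]; d = y[..., 1::2, 1::2]
    ll = (a + b + c + d) / 2  # low-pass average
    lh = (a - b + c - d) / 2  # horizontal detail
    hl = (a + b - c - d) / 2  # vertical detail
    hh = (a - b - c + d) / 2  # diagonal detail
    return torch.cat([ll, lh, hl, hh], dim=1)

def haar_refine(z):
    """Inverse of haar_coarsen: haar_refine(haar_coarsen(y))
    reconstructs y up to floating-point rounding."""
    ll, lh, hl, hh = z.chunk(4, dim=1)
    a = (ll + lh + hl + hh) / 2
    b = (ll - lh + hl - hh) / 2
    c = (ll + lh - hl - hh) / 2
    d = (ll - lh - hl + hh) / 2
    N, C, H, W = a.shape
    y = torch.empty(N, C, 2 * H, 2 * W, device=z.device, dtype=z.dtype)
    y[..., 0::2, 0::2] = a; y[..., 0::2, 1::2] = b
    y[..., 1::2, 0::2] = c; y[..., 1::2, 1::2] = d
    return y
```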
Item Metadata
Title | Designing deep neural networks for high dimensional problems : leveraging the connection between deep neural networks and differential equations
Creator | Lensink, Keegan
Publisher | University of British Columbia
Date Issued | 2025
Language | eng
Date Available | 2025-03-19
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0448223
Degree Grantor | University of British Columbia
Graduation Date | 2025-05
Scholarly Level | Graduate
Aggregated Source Repository | DSpace