Open Collections
UBC Theses and Dissertations
Light field spatial and angular super-resolution
Wafa, Abrar
Abstract
Light field (LF) technology offers a truly immersive experience, with the potential to revolutionize entertainment, education, virtual and augmented reality, autonomous driving, and digital health. There are two common techniques for capturing LFs: multiple-camera arrays and microlens arrays (i.e., plenoptic cameras). However, LF capturing techniques face a trade-off between spatial and angular resolution: camera arrays capture high-spatial-resolution LF images with sparse angular sampling (i.e., fewer views), while plenoptic cameras capture dense angular sampling (i.e., more views) at low spatial resolution due to the size of the plenoptic camera sensor. Since constructing an LF camera array is expensive, time-consuming, and often impractical, many efforts have been made to improve the spatial resolution of LF images captured by plenoptic cameras.
In this thesis, we propose two novel methods for LF spatial super-resolution (SR). First, we propose a learning-based model for spatial SR that takes advantage of epipolar plane image (EPI) information to ensure smooth disparity between the generated views and, in turn, construct high-spatial-resolution LF images. In our second contribution, we exploit the full four-dimensional (4D) LF by proposing a deep-learning spatial SR approach that considers both spatial and angular information (i.e., information within each view and across views) and progressively reconstructs high-resolution LF images at different upscaling levels.
Another challenge when dealing with LF images is the enormous amount of data they generate, which requires a significant increase in bandwidth. A possible solution is to drop specific views at the transmitting end and synthesize them at the receiving end, thus minimizing the amount of data that needs to be transferred or stored. Accordingly, our third contribution focuses on LF angular SR, synthesizing virtual LF views from a sparse set of input views using two novel approaches. First, a deep recursive residual network is applied to EPI information to generate one in-between view. Second, a generative adversarial network approach is proposed that generates up to five in-between views, using LF spatial and angular information for efficient angular SR with minimal impact on the visual quality of the generated LF content.
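The 4D LF parameterization and the EPI construction the abstract relies on can be sketched in a few lines. The array layout, toy dimensions, and helper names below are illustrative assumptions, not the thesis's actual pipeline; the neighbour-averaging baseline stands in for the learned view-synthesis networks.

```python
import numpy as np

# A 4D light field L(u, v, s, t): (u, v) index the angular grid of
# views, (s, t) the spatial pixels within each view (toy sizes, assumed).
U, V, S, T = 5, 5, 32, 48
lf = np.random.rand(U, V, S, T).astype(np.float32)

def epi_horizontal(lf, v, s):
    """Fix one angular row v and one spatial row s: the remaining
    (u, t) slice is a horizontal EPI, in which each scene point traces
    a line whose slope encodes its disparity across views."""
    return lf[:, v, s, :]  # shape (U, T)

def drop_and_blend(lf, u, v):
    """Naive angular-SR baseline (assumed, not the thesis's method):
    approximate a dropped view (u, v) by averaging its two horizontal
    angular neighbours. Learned models replace this simple blend."""
    return 0.5 * (lf[u - 1, v] + lf[u + 1, v])

epi = epi_horizontal(lf, v=2, s=16)
print(epi.shape)    # one row per view along the angular axis

synth = drop_and_blend(lf, u=2, v=2)
print(synth.shape)  # same spatial size as a captured view
```

Dropping views at the transmitter and reconstructing them this way at the receiver is what makes angular SR a bandwidth-reduction tool: only the sparse views travel over the channel.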
Item Metadata
Title: Light field spatial and angular super-resolution
Creator: Wafa, Abrar
Supervisor:
Publisher: University of British Columbia
Date Issued: 2023
Genre:
Type:
Language: eng
Date Available: 2024-01-31
Provider: Vancouver : University of British Columbia Library
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
DOI: 10.14288/1.0422497
URI:
Degree:
Program:
Affiliation:
Degree Grantor: University of British Columbia
Graduation Date: 2023-05
Campus:
Scholarly Level: Graduate
Rights URI:
Aggregated Source Repository: DSpace