Open Collections
UBC Theses and Dissertations
From videos to animatable 3D neural characters
Su, Shih-Yang
Abstract
Realistic 3D human models have extensive applications across many domains, including entertainment, healthcare, sports, and fashion. The challenge in recreating lifelike, high-fidelity virtual humans lies in capturing subtle expressions and complex body dynamics. Consequently, human digitalization often requires sophisticated, tailor-made multi-camera capture studios and high-precision motion-tracking systems, limiting access to a select few. While recent developments in deep learning have made it possible to model virtual characters from videos, existing approaches still rely on template meshes and 3D surface priors constructed from accurate 3D scans, labels, and multi-view captures.

In this dissertation, we take steps toward template-free 3D digitalization, enabling animatable 3D human modeling directly from video footage without 3D annotations or surface priors. Our main contributions are: 1) an analysis-by-synthesis framework for jointly learning 3D body shape, appearance, and pose directly from monocular videos; 2) a disentangled body feature representation without pre-defined 3D surfaces for sample-efficient learning and generalization to unseen animations; 3) a memory-efficient factorized volume representation for capturing local appearance and geometry structures; 4) a hybrid human body model combining point-based and neural-field representations for creating 3D avatars with detailed and consistent appearances. Together, these approaches build upon each other to advance the technology for accessible human digitalization.
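The "memory-efficient factorized volume representation" mentioned above can be motivated with a simple parameter count. The sketch below is an illustrative assumption, not the dissertation's implementation: it compares a dense 3D feature grid against a CP-style factorization into per-axis vectors, the general idea behind factorized volumes. All sizes and names are hypothetical.

```python
# Hedged sketch, not the dissertation's method: why factorizing a dense
# 3D feature volume into per-axis components saves memory. All sizes
# and names here are illustrative assumptions.

def dense_volume_params(res: int, channels: int) -> int:
    """Parameter count of a dense res x res x res feature grid."""
    return res ** 3 * channels

def factorized_params(res: int, channels: int, rank: int) -> int:
    """Parameter count of a rank-R CP-style factorization: three
    per-axis vectors of length `res` per component, plus a small
    per-component channel basis."""
    return rank * (3 * res + channels)

res, channels, rank = 128, 16, 32
dense = dense_volume_params(res, channels)     # 33,554,432 parameters
fact = factorized_params(res, channels, rank)  #     12,800 parameters
print(f"dense: {dense:,}  factorized: {fact:,}  ratio: {dense / fact:.0f}x")

# A feature at grid index (i, j, k) is then reconstructed by multiplying
# the three axis vectors component-wise and mixing via the channel basis:
#   f(i, j, k) = sum_r vx[r][i] * vy[r][j] * vz[r][k] * basis[r]
```

At these illustrative sizes the factorized form is over three orders of magnitude smaller than the dense grid, which is what makes such representations attractive for capturing local structure at high resolution.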
Item Metadata
Title | From videos to animatable 3D neural characters
Creator | Su, Shih-Yang
Supervisor |
Publisher | University of British Columbia
Date Issued | 2024
Genre |
Type |
Language | eng
Date Available | 2024-03-07
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0440644
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Graduation Date | 2024-05
Campus |
Scholarly Level | Graduate
Rights URI |
Aggregated Source Repository | DSpace