Deep kernel mean embeddings for generative modeling and feedforward style transfer
Chen, Tian Qi
Abstract
The generation of data has traditionally been specified using hand-crafted algorithms. However, often the exact generative process is unknown and only a limited number of samples are observed. One such case is generating images that look visually similar to an exemplar image or that appear to come from a distribution of images. We look into learning the generating process by constructing a similarity function that measures how close the generated image is to the target image. We discuss a framework in which the similarity function is specified by a pre-trained neural network without fine-tuning, as is the case for neural texture synthesis, and a framework where the similarity function is learned along with the generative process in an adversarial setting, as is the case for generative adversarial networks. The main point of discussion is the combined use of neural networks and maximum mean discrepancy as a versatile similarity function. Additionally, we describe an improvement to state-of-the-art style transfer that allows faster computation while maintaining the generality of the generating process. The proposed objective has desirable properties such as a simpler optimization landscape, intuitive parameter tuning, and consistent frame-by-frame performance on video. We use 80,000 natural images and 80,000 paintings to train a procedure for artistic style transfer that is efficient but also allows arbitrary content and style images.
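The central tool named in the abstract, maximum mean discrepancy (MMD) used as a similarity function over deep features, can be illustrated with a short sketch. The snippet below is not code from the thesis: it computes a standard biased MMD² estimate with a Gaussian kernel between two sets of feature vectors, which in the thesis setting would be activations of a pre-trained network for the generated and target images. The kernel choice, bandwidth, and array shapes are assumptions made for illustration.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of a and rows of b.
    sq_dists = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of squared maximum mean discrepancy between samples x and y:
    # mean k(x, x') + mean k(y, y') - 2 * mean k(x, y).
    k_xx = gaussian_kernel(x, x, sigma)
    k_yy = gaussian_kernel(y, y, sigma)
    k_xy = gaussian_kernel(x, y, sigma)
    return k_xx.mean() + k_yy.mean() - 2 * k_xy.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for deep features of generated and target images; in the thesis
    # setting these would come from a pre-trained network rather than random draws.
    gen_features = rng.normal(0.0, 1.0, size=(64, 128))
    tgt_features = rng.normal(0.5, 1.0, size=(64, 128))
    print(f"MMD^2 estimate: {mmd2(gen_features, tgt_features):.4f}")
```

Because the estimate is differentiable in the features, a discrepancy of this form can serve as a training signal for a generator, which is the role the abstract describes for it.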
Item Metadata
Title | Deep kernel mean embeddings for generative modeling and feedforward style transfer
Creator | Chen, Tian Qi
Publisher | University of British Columbia
Date Issued | 2017
Description | Same as the abstract above.
Genre |
Type |
Language | eng
Date Available | 2017-08-16
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0354397
URI |
Degree |
Program |
Affiliation |
Degree Grantor | University of British Columbia
Graduation Date | 2017-09
Campus |
Scholarly Level | Graduate
Rights URI |
Aggregated Source Repository | DSpace