UBC Theses and Dissertations
Variational learning for latent Gaussian model of discrete data
Khan, Mohammad
Abstract
This thesis focuses on the variational learning of latent Gaussian models for discrete data. The learning is difficult since the discrete-data likelihood is not conjugate to the Gaussian prior. Existing methods to solve this problem are either inaccurate or slow. We consider a variational approach based on evidence lower bound optimization. We solve the following two main problems of the variational approach: the computational inefficiency associated with the maximization of the lower bound and the intractability of the lower bound. For the first problem, we establish concavity of the lower bound and design fast learning algorithms using concave optimization. For the second problem, we design tractable and accurate lower bounds, some of which have provable error guarantees. We show that these lower bounds not only make accurate variational learning possible, but can also give rise to algorithms with a wide variety of speed-accuracy trade-offs. We compare various lower bounds, both theoretically and experimentally, giving clear design guidelines for variational algorithms. Through application to real-world data, we show that the variational approach can be more accurate and faster than existing methods.
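As a concrete illustration of the kind of tractable lower bound the abstract describes, the sketch below fits a Gaussian variational posterior to a one-dimensional latent Gaussian model with Bernoulli (logit) observations, where the non-conjugate expected log-likelihood is replaced by the classical Jaakkola-Jordan quadratic bound so that the ELBO and its maximizers stay in closed form. This is a minimal sketch under assumed simplifications (a single latent variable, a standard-normal prior), not the thesis code; the names jj_lambda and elbo_jj are illustrative.

    import numpy as np

    def jj_lambda(xi):
        # Jaakkola-Jordan coefficient: lambda(xi) = tanh(xi/2) / (4*xi)
        return np.tanh(xi / 2.0) / (4.0 * xi)

    def elbo_jj(y, m, v, xi, mu=0.0, s2=1.0):
        # Tractable lower bound on log p(y) for the model
        #   z ~ N(mu, s2),  y_n | z ~ Bernoulli(sigmoid(z)),
        # with variational posterior q(z) = N(m, v) and one local
        # variational parameter xi_n per observation.
        lam = jj_lambda(xi)
        Ez2 = v + m ** 2  # E_q[z^2]
        # JJ bound: E_q[log p(y_n | z)] >= y_n*m + log(sigmoid(xi_n))
        #           - (m + xi_n)/2 - lam_n * (E_q[z^2] - xi_n^2)
        log_sig_xi = -np.log1p(np.exp(-xi))
        like = np.sum(y * m + log_sig_xi - (m + xi) / 2.0
                      - lam * (Ez2 - xi ** 2))
        # KL( N(m, v) || N(mu, s2) ) in closed form
        kl = 0.5 * (np.log(s2 / v) + (v + (m - mu) ** 2) / s2 - 1.0)
        return like - kl

    # Coordinate ascent on the bound; the optimal xi_n is sqrt(E_q[z^2]).
    y = np.array([1.0, 1.0, 0.0, 1.0, 0.0])
    m, v = 0.0, 1.0
    for _ in range(50):
        xi = np.sqrt(v + m ** 2) * np.ones_like(y)  # local parameter update
        lam = jj_lambda(xi)
        v = 1.0 / (1.0 + 2.0 * np.sum(lam))  # closed-form Gaussian updates
        m = v * np.sum(y - 0.5)              # (standard-normal prior assumed)
    xi = np.sqrt(v + m ** 2) * np.ones_like(y)
    print("JJ lower bound on log p(y):", elbo_jj(y, m, v, xi))

Because the bound is quadratic in z, each update of (m, v) is a concave maximization with a closed-form solution, which is the kind of speed-accuracy trade-off the thesis analyzes: the JJ bound is fast but looser than, say, piecewise bounds with provable error guarantees.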
Item Metadata

Title: Variational learning for latent Gaussian model of discrete data
Creator: Khan, Mohammad
Publisher: University of British Columbia
Date Issued: 2012
Language: eng
Date Available: 2012-12-01
Provider: Vancouver : University of British Columbia Library
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
DOI: 10.14288/1.0052219
Degree Grantor: University of British Columbia
Graduation Date: 2013-05
Scholarly Level: Graduate
Aggregated Source Repository: DSpace