Linear and parallel learning of Markov random fields
Dror Mizrahi, Yariv
Abstract
In this thesis, we introduce a new class of embarrassingly parallel parameter learning algorithms for Markov random fields (MRFs) with untied parameters, which are efficient for a large class of practical models. The algorithms parallelize naturally over cliques and, for graphs of bounded degree, have complexity that is linear in the number of cliques. We refer to these algorithms with the acronym LAP, which stands for Linear And Parallel. Unlike their competitors, the marginal versions of the proposed algorithms are fully parallel and for log-linear models they are also data efficient, requiring only the local sufficient statistics of the data to estimate parameters. LAP algorithms are ideal for parameter learning in big graphs and big data applications.

The correctness of the newly proposed algorithms relies heavily on the existence and uniqueness of the normalized potential representation of an MRF. We capitalize on this theoretical result to develop a new theory of correctness and consistency of LAP estimators corresponding to different local graph neighbourhoods. This theory also establishes a general condition on composite likelihood decompositions of MRFs that guarantees the global consistency of distributed estimators, provided the local estimators are consistent.

We introduce a conditional variant of LAP that enables us to attack parameter estimation of fully-observed models of arbitrary connectivity, including fully connected Boltzmann distributions. Once again, we show consistency for this distributed estimator, and relate it to distributed pseudo-likelihood estimators.

Finally, for linear and non-linear inverse problems with a sparse forward operator, we present a new algorithm, named iLAP, which decomposes the inverse problem into a set of smaller dimensional inverse problems that can be solved independently. This parallel estimation strategy is also memory efficient.
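The clique-parallel scheme in the first paragraph can be made concrete with a small sketch. The Python below applies the marginal idea to a binary pairwise MRF with agreement features on edges; the function names, the toy chain graph, and the random data are illustrative assumptions, not notation from the thesis. Each edge's fit touches only the empirical marginal over its own neighbourhood, which is why the loop at the bottom could be handed to a process pool unchanged.

    import itertools
    import numpy as np

    def one_neighbourhood(edge, edges):
        # the edge's variables plus every variable sharing an edge with them
        i, j = edge
        scope = {i, j}
        for a, b in edges:
            if a in (i, j):
                scope.add(b)
            if b in (i, j):
                scope.add(a)
        return sorted(scope)

    def fit_local(edge, edges, data, iters=500, lr=0.5):
        # Fit an auxiliary MRF on the 1-neighbourhood of `edge` by exact
        # maximum likelihood on the empirical marginal, then read off the
        # weight of this edge alone.
        scope = one_neighbourhood(edge, edges)
        idx = {v: k for k, v in enumerate(scope)}
        local = [(a, b) for a, b in edges if a in idx and b in idx]
        X = data[:, scope]                  # local sufficient statistics only
        states = np.array(list(itertools.product([0, 1], repeat=len(scope))))
        feats = np.array([[float(s[idx[a]] == s[idx[b]]) for a, b in local]
                          for s in states])
        emp = np.array([np.mean(X[:, idx[a]] == X[:, idx[b]])
                        for a, b in local])
        w = np.zeros(len(local))
        for _ in range(iters):              # gradient ascent on local log-lik.
            p = np.exp(feats @ w)
            p /= p.sum()
            w += lr * (emp - feats.T @ p)
        return w[local.index(edge)]

    if __name__ == "__main__":
        edges = [(0, 1), (1, 2), (2, 3), (3, 4)]  # toy 5-node chain
        data = np.random.default_rng(0).integers(0, 2, (5000, 5))
        # embarrassingly parallel: each edge's fit is independent, so a
        # process pool (multiprocessing.Pool.map) could run them all at once
        print({e: round(fit_local(e, edges, data), 3) for e in edges})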
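The normalized potential representation underpinning the consistency theory is, as far as the standard literature goes, the canonical-potential construction relative to a reference state x* (see e.g. Lauritzen, Graphical Models, 1996); the sketch below follows that construction, not the thesis's own development. Any strictly positive MRF admits exactly one representation

    \[
      p(x) \;=\; \frac{1}{Z}\,\exp\!\Big(\sum_{C} \phi_C(x_C)\Big),
      \qquad
      \phi_C(x_C) = 0 \ \text{whenever } x_c = x^*_c \text{ for some } c \in C,
    \]

with the potentials recovered by Mobius inversion,

    \[
      \phi_C(x_C) \;=\; \sum_{B \subseteq C} (-1)^{|C \setminus B|}
      \log p\big(x_B,\, x^*_{V \setminus B}\big),
    \]

and \(\phi_C \equiv 0\) for any C that is not a clique of the graph (Hammersley-Clifford). Uniqueness is what allows a local estimator, fitted on a neighbourhood marginal, to recover exactly the potentials of the global model.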
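For the conditional variant, one hedged way to write the two objectives being related, assuming the neighbourhood A_q of a clique q and its boundary \(\partial A_q\) are defined as in the thesis, is

    \[
      \hat\theta^{\mathrm{PL}}
      = \arg\max_{\theta} \sum_{n=1}^{N} \sum_{i \in V}
        \log p\big(x_i^{(n)} \,\big|\, x_{N(i)}^{(n)};\, \theta\big),
      \qquad
      \hat\theta_q
      = \arg\max_{\theta'} \sum_{n=1}^{N}
        \log p\big(x_{A_q}^{(n)} \,\big|\, x_{\partial A_q}^{(n)};\, \theta'\big).
    \]

On this reading, pseudo-likelihood is the limiting case where each conditioning block shrinks to a single variable given its Markov blanket; the precise neighbourhoods used by conditional LAP follow the thesis.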
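Finally, the iLAP decomposition admits a toy rendering in the linear case: if the sparse forward operator's support graph disconnects, the global least-squares problem factors into independent low-dimensional problems. The component-wise split below is an illustrative simplification under that assumption (the names and the lsqr solver are ours, and the thesis also covers non-linear operators); the demo checks that the blockwise solve agrees with the global one.

    import numpy as np
    from scipy.sparse import block_diag, bmat, csr_matrix
    from scipy.sparse.csgraph import connected_components
    from scipy.sparse.linalg import lsqr

    def ilap_solve(F, y):
        # Split min_x ||F x - y||^2 along connected components of F's
        # support graph, then solve each small subproblem on its own.
        m, n = F.shape
        A = csr_matrix((F != 0).astype(int))
        adj = bmat([[None, A], [A.T, None]], format="csr")  # rows+cols graph
        k, labels = connected_components(adj, directed=False)
        x = np.zeros(n)
        for c in range(k):                   # independent blocks: these
            rows = np.where(labels[:m] == c)[0]  # could run in parallel
            cols = np.where(labels[m:] == c)[0]
            if len(rows) == 0 or len(cols) == 0:
                continue                     # unconstrained block stays at 0
            x[cols] = lsqr(F[rows][:, cols], y[rows])[0]
        return x

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        F = block_diag([csr_matrix(rng.standard_normal((4, 3)))
                        for _ in range(3)], format="csr")
        y = rng.standard_normal(F.shape[0])
        print(np.allclose(ilap_solve(F, y), lsqr(F, y)[0], atol=1e-6))

Each subproblem only ever materializes its own rows and columns of F, which is where the memory efficiency claimed in the abstract comes from.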
Item Metadata
Title | Linear and parallel learning of Markov random fields |
Creator | Dror Mizrahi, Yariv |
Publisher | University of British Columbia |
Date Issued | 2014 |
Genre | |
Type | |
Language | eng |
Date Available | 2014-12-09 |
Provider | Vancouver : University of British Columbia Library |
Rights | Attribution-NonCommercial-NoDerivs 2.5 Canada |
DOI | 10.14288/1.0167069 |
URI | |
Degree | |
Program | |
Affiliation | |
Degree Grantor | University of British Columbia |
Graduation Date | 2015-02 |
Campus | |
Scholarly Level | Graduate |
Rights URI | |
Aggregated Source Repository | DSpace |