Approximate posterior inference for Bayesian nonparametrics with total variation error bounds

Li, Xinglong
Abstract
Bayesian nonparametric (BNP) models provide a flexible and powerful framework for statistical modeling by allowing the number of features or subgroups within a population to grow with the volume of data. However, posterior inference in BNP models is challenging because of the infinitely many parameters involved, and the lack of general, efficient inference procedures impedes their practical application. Exact posterior inference methods either analytically marginalize out the infinitely many parameters or introduce auxiliary variables that adaptively adjust the model size during inference. The former approach relies on conjugacy between priors and likelihoods and suffers from high computational cost. The latter is also computationally demanding, as it requires numerical integration during sampling for nonconjugate models.

A common alternative in fitting BNP models is to approximate the nonparametric model with a parametric one and then apply a standard inference algorithm. While practical, such parametric truncation can incur significant, unquantified posterior approximation error, particularly for BNP models with heavy tails that support power-law behavior in the population. Previous work on truncated inference in BNP models chose the truncation level by analyzing the forward generative model, which does not accurately reflect the approximation error with respect to the target posterior distribution. This thesis develops approximate inference algorithms that can be used directly for posterior inference in general BNP models. We propose truncated inference methods and provide estimates of the posterior truncation error. Rather than setting the truncation level based on prior approximation error, we specify a desired level of posterior truncation error and let the algorithm adapt the truncation level until that error level is reached. The proposed algorithms are general in that they apply to a wide range of BNP models with completely random measure (CRM) priors. We apply them to edge-exchangeable network models, where the feature assignment variables are observed, and to latent feature models, where they are latent.
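The error metric in the title is the total variation (TV) distance, d_TV(P, Q) = sup_A |P(A) - Q(A)|, here measured between the truncated and exact posteriors. As a rough illustration of the adaptive truncation idea described in the abstract, the minimal Python sketch below doubles the truncation level K until a surrogate error bound falls below a tolerance eps. The geometric prior-tail surrogate and all names are assumptions made for illustration; they stand in for, and are not, the thesis's posterior TV error estimates.

```python
# Illustrative sketch only: the error proxy is a prior-style geometric tail
# bound standing in for the thesis's posterior TV error estimate, and all
# function names and parameters are hypothetical.

def truncation_error_bound(K, n_obs, alpha=2.0):
    """Surrogate truncation error for a CRM whose expected feature weights
    decay geometrically, E[w_k] = r**k with r = alpha / (1 + alpha).

    Uses a union bound over the n_obs observations on the event that any
    observation exhibits a feature with index beyond the truncation level K.
    """
    r = alpha / (1.0 + alpha)
    tail_mass = r ** (K + 1) / (1.0 - r)  # closed-form geometric tail past K
    return min(1.0, n_obs * tail_mass)

def adapt_truncation(n_obs, eps=1e-3, K=1, alpha=2.0):
    """Grow the truncation level K until the error bound drops below eps.
    Doubling K keeps the number of refits logarithmic in the final level."""
    while truncation_error_bound(K, n_obs, alpha) > eps:
        K *= 2
    return K

if __name__ == "__main__":
    K = adapt_truncation(n_obs=500, eps=1e-3)
    print(f"chosen truncation level: K = {K}")
```

In this toy setting the bound is available in closed form; the thesis's contribution, per the abstract, is to estimate the analogous posterior quantity so the same adapt-until-below-tolerance loop can be driven by posterior rather than prior error.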
Item Metadata
| Field | Value |
| --- | --- |
| Title | Approximate posterior inference for Bayesian nonparametrics with total variation error bounds |
| Creator | Li, Xinglong |
| Supervisor | |
| Publisher | University of British Columbia |
| Date Issued | 2025 |
| Genre | |
| Type | |
| Language | eng |
| Date Available | 2025-04-17 |
| Provider | Vancouver : University of British Columbia Library |
| Rights | Attribution-NonCommercial-NoDerivatives 4.0 International |
| DOI | 10.14288/1.0448446 |
| URI | |
| Degree | |
| Program | |
| Affiliation | |
| Degree Grantor | University of British Columbia |
| Graduation Date | 2025-05 |
| Campus | |
| Scholarly Level | Graduate |
| Rights URI | |
| Aggregated Source Repository | DSpace |