Open Collections
UBC Theses and Dissertations
Interpretable graph distribution representation learning for graph optimization problems
Zhang, Congsong
Abstract
Graph optimization problems (GOPs) are fundamental in both graph theory and artificial intelligence (AI), playing an important role in many real-world applications and motivating researchers to develop approaches for solving them. With advances in machine learning (ML), particularly deep learning, numerous ML-based approaches, such as graph neural networks (GNNs), have been developed for GOPs. These methods focus on learning graph representations to approximate solutions, with some directly interpreting the representations as vertex probabilities from which solutions are generated. However, the learned representations often fail to incorporate domain-specific expertise reflecting the properties of GOPs, which reduces their interpretability and limits them to producing approximate rather than exact solutions. In this thesis, we present an interpretable graph distribution representation learning framework that integrates domain-specific knowledge tailored to specific GOPs, and we implement it with GNN models. The proposed distribution representation has two basic components: marginal distributions and copula parameters. The marginal distributions capture the likelihood of potential solution values for a GOP, while the copula parameters model the dependencies among these values, ensuring adherence to the problem's constraints. Building on these learned representations, we propose a heuristic that searches for exact solutions of GOPs by exploiting the rich solution information embedded in the learned distributions. The heuristic computes the Shannon entropy of each search path and prioritizes those with the least uncertainty. By generalizing the search process at a high level, the heuristic remains broadly applicable to diverse problems while retaining its effectiveness on specific problems through the integration of domain-specific knowledge.
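To illustrate the two components of the distribution representation, the sketch below pairs binary per-vertex marginal probabilities with a single Gaussian-copula dependence parameter. This is a minimal toy under stated assumptions, not the thesis's construction: the choice of a Gaussian copula, the binary solution values, and all function names (`norm_cdf`, `sample_joint`) are assumptions for illustration.

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sample_joint(p1, p2, rho, n=10000, seed=0):
    """Sample pairs of binary vertex values whose marginals are p1 and p2,
    with dependence induced by a Gaussian copula parameter rho.
    (Hypothetical sketch: a real GOP would couple many vertices, not two.)"""
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho * rho)
    out = []
    for _ in range(n):
        # Correlated standard normals -> correlated uniforms -> binary values.
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + s * rng.gauss(0.0, 1.0)
        out.append((norm_cdf(z1) < p1, norm_cdf(z2) < p2))
    return out
```

With `rho` near 1, the two vertex values co-occur far more often than independent sampling from the marginals alone would allow, which is the role the copula parameters play in keeping sampled solution values consistent with the problem's constraints.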
In addition, we explore the connection between intersection graph models and GNN models. We prove a phase transition in the diameters of two random intersection graph models, offering valuable insights into the underlying properties of GNN models.
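The entropy-guided search described in the abstract can be sketched as follows. This is a minimal illustration with hypothetical names (`shannon_entropy`, `prioritize_paths`), assuming each search path is summarized by per-vertex marginal distributions over candidate solution values; the actual heuristic in the thesis may aggregate uncertainty differently.

```python
import math

def shannon_entropy(probs):
    # Shannon entropy (in bits) of a discrete distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def prioritize_paths(paths):
    """Order candidate search paths by the total entropy of their
    per-vertex marginal distributions, least uncertain first."""
    def path_entropy(path):
        return sum(shannon_entropy(dist) for dist in path)
    return sorted(paths, key=path_entropy)

# Each path is a list of per-vertex marginals over solution values
# (toy numbers, purely illustrative).
paths = [
    [[0.9, 0.1], [0.8, 0.2]],  # fairly certain assignments
    [[0.5, 0.5], [0.5, 0.5]],  # maximally uncertain assignments
]
ordered = prioritize_paths(paths)  # explores the confident path first
```

Prioritizing low-entropy paths concentrates search effort where the learned distributions are most decisive, while high-entropy paths are deferred rather than discarded, preserving exactness.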
Item Metadata
Title: Interpretable graph distribution representation learning for graph optimization problems
Creator: Zhang, Congsong
Supervisor:
Publisher: University of British Columbia
Date Issued: 2025
Genre:
Type:
Language: eng
Date Available: 2025-07-07
Provider: Vancouver : University of British Columbia Library
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
DOI: 10.14288/1.0449295
URI:
Degree:
Program:
Affiliation:
Degree Grantor: University of British Columbia
Graduation Date: 2025-09
Campus:
Scholarly Level: Graduate
Rights URI:
Aggregated Source Repository: DSpace