UBC Theses and Dissertations

Representation learning with explicit and implicit graph structures
Fatemi, Bahare

Abstract

The world around us is composed of objects, each having relations with other objects. The objects and relations form a (hyper)graph, with objects as nodes and relations between objects as (hyper)edges. In a learning task, the structure representing the relations between the nodes is either given explicitly in the training set or is implicit and must be inferred. This dissertation studies graph representation learning with both explicit and implicit structures. For explicit structure, we first tackle the challenge of enforcing taxonomic information while embedding entities and relations. We prove that some fully expressive models cannot respect subclass and subproperty information, and we show that minimal modifications to an existing knowledge graph completion method suffice to enable the injection of taxonomic information. A second challenge is representing explicit structure in relational hypergraphs, which contain relations defined on an arbitrary number of entities. While techniques such as reification exist that convert non-binary relations into binary ones, we show that current embedding-based methods do not work well out of the box on knowledge graphs obtained through these techniques. We therefore introduce embedding-based methods that work directly with relations of arbitrary arity, develop public datasets, benchmarks, and baselines, and show experimentally that the proposed models are more effective than the baselines. We further bridge the gap between relational algebra and knowledge hypergraphs by proposing an embedding-based model that can represent relational algebra operations. Having introduced novel architectures for explicitly graph-structured data, we then investigate how models with relational inductive biases can be developed and applied to problems with implicit structure. Graph representation learning models work well when the structure is explicit, but this structure is not always available in real-world applications. We propose SLAPS (Simultaneous Learning of Adjacency and graph neural network Parameters with Self-supervision), a method that provides supplementary supervision for inferring a graph structure through a self-supervised task. An experimental study demonstrates that SLAPS scales to large graphs with hundreds of thousands of nodes and outperforms several baselines on established benchmarks.
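
To make the taxonomy-injection idea concrete, here is a small hedged sketch, stated for a generic DistMult-style bilinear scorer rather than for the dissertation's specific model: if entity embeddings are constrained to be nonnegative, an elementwise ordering on relation embeddings makes the score monotone, so a subproperty statement is respected by construction.

\phi(h, r, t) = \sum_{k=1}^{d} h_k \, r_k \, t_k, \qquad h, t \in \mathbb{R}_{\ge 0}^{d}

If p \sqsubseteq q is encoded by requiring r^{(p)}_k \le r^{(q)}_k for all k, then \phi(h, p, t) \le \phi(h, q, t) for every entity pair, so any fact whose score for p clears the prediction threshold also clears it for q.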
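
Reification, mentioned above as the standard workaround for non-binary relations, introduces a fresh entity that stands for the whole tuple and links it to each argument with a binary relation. The Python sketch below is purely illustrative; the names (reify, flies_between, and so on) are invented for this example.

from itertools import count

_event_ids = count()

def reify(relation, entities):
    # Convert relation(e1, ..., en) into n binary triples by introducing
    # a fresh "event" entity that stands for the original tuple.
    event = f"event_{next(_event_ids)}"
    return [(event, f"{relation}_arg{i}", e)
            for i, e in enumerate(entities, 1)]

# The ternary fact flies_between(air_canada, vancouver, toronto) becomes
# three binary triples sharing the same event entity:
# [('event_0', 'flies_between_arg1', 'air_canada'),
#  ('event_0', 'flies_between_arg2', 'vancouver'),
#  ('event_0', 'flies_between_arg3', 'toronto')]
triples = reify("flies_between", ["air_canada", "vancouver", "toronto"])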
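
A scorer that works directly with relations of arbitrary arity, by contrast, needs no such conversion. The sketch below is a multilinear generalization of DistMult, a simple baseline-style model used here only to illustrate arity-agnostic scoring; it is not the dissertation's proposed architecture.

import numpy as np

rng = np.random.default_rng(0)
dim = 8  # embedding dimension (illustrative)

# Hypothetical embedding tables; in a real system these are learned.
entity_emb = {e: rng.normal(size=dim)
              for e in ("air_canada", "vancouver", "toronto")}
relation_emb = {"flies_between": rng.normal(size=dim)}

def score(relation, entities):
    # Multilinear product sum_k r[k] * e1[k] * ... * en[k]; the same
    # formula applies unchanged to relations of any arity.
    out = relation_emb[relation].copy()
    for e in entities:
        out = out * entity_emb[e]
    return float(out.sum())

s = score("flies_between", ["air_canada", "vancouver", "toronto"])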
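
Finally, the SLAPS recipe can be sketched schematically, under assumptions spelled out here: a freely parameterized adjacency matrix (one possible choice of graph generator) is learned jointly with GNN weights, and a denoising task on masked node features supplies the additional self-supervision. All module and variable names below are hypothetical, and the code is a minimal sketch of the general recipe rather than the dissertation's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGCNLayer(nn.Module):
    # One dense graph convolution: H' = A_hat @ H @ W.
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, a_hat, h):
        return self.lin(a_hat @ h)

class SlapsLikeModel(nn.Module):
    def __init__(self, n_nodes, d_feat, d_hid, n_classes):
        super().__init__()
        # Learned adjacency logits: the implicit structure being inferred.
        self.adj_logits = nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.gcn = DenseGCNLayer(d_feat, d_hid)
        self.cls_head = DenseGCNLayer(d_hid, n_classes)  # classification
        self.dae_head = DenseGCNLayer(d_hid, d_feat)     # denoising

    def adjacency(self):
        a = torch.sigmoid(self.adj_logits)
        a = (a + a.T) / 2                      # symmetrize
        deg = a.sum(dim=1).clamp(min=1e-6)
        d_inv_sqrt = deg.rsqrt()
        # Symmetric normalization: D^{-1/2} A D^{-1/2}.
        return a * d_inv_sqrt.unsqueeze(1) * d_inv_sqrt.unsqueeze(0)

    def forward(self, x, mask_rate=0.2):
        a_hat = self.adjacency()
        # Self-supervision: mask some features, reconstruct them via the graph.
        mask = (torch.rand_like(x) < mask_rate).float()
        h_noisy = F.relu(self.gcn(a_hat, x * (1 - mask)))
        x_rec = self.dae_head(a_hat, h_noisy)
        loss_ss = ((x_rec - x) * mask).pow(2).sum() / mask.sum().clamp(min=1.0)
        # Main task: node classification on the same learned graph.
        h = F.relu(self.gcn(a_hat, x))
        logits = self.cls_head(a_hat, h)
        return logits, loss_ss

# Hypothetical training objective: classification loss on labeled nodes
# plus a weighted self-supervised term.
#   logits, loss_ss = model(x)
#   loss = F.cross_entropy(logits[train_idx], y[train_idx]) + lam * loss_ss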


Rights

Attribution-NonCommercial-NoDerivatives 4.0 International