UBC Theses and Dissertations


On tensor decompositions, maximum likelihood estimation, and causal inference

Semnani, Pardis

Abstract

In this thesis, we study three problems, each of which concerns inferring certain pieces of information from observed data. We approach these problems with tools from algebraic geometry, statistics, and combinatorics.

In Chapter 2, we consider data that can be recorded as a tensor admitting a special type of decomposition called an orthogonal tensor-train decomposition. Finding equations that define varieties of low-rank tensors is hard in general; the set of orthogonally decomposable tensors, however, is defined by appealing quadratic equations. The tensors we consider extend orthogonally decomposable tensors, and we show that they are defined by similar quadratic equations, together with linear equations and one equation of higher degree.

In Chapter 3, we study maximum likelihood estimation of log-concave densities that lie in the graphical model of a given undirected graph G and factorize according to this graph with log-concave factors. We show that the maximum likelihood estimate (MLE) is a product of exponentials of tent functions, one for each maximal clique of G. Although the family of densities in question is infinite-dimensional, our results imply that the MLE can be found by solving a finite-dimensional convex optimization problem, and we provide an implementation. Furthermore, when G is chordal, we prove that the MLE exists and is unique with probability 1 whenever the number of sample points exceeds the size of the largest clique of G. Finally, we discuss conditions under which a log-concave density in the graphical model of G admits a log-concave factorization according to G.

In Chapter 4, we study the problem of inferring causality from an observed i.i.d. sample drawn from a distribution faithful to a directed graph G, which may contain directed cycles. In particular, our goal is to recover the Markov equivalence class of G. We propose an algorithm and conjecture that it is consistent: if the set of conditional independence relations satisfied by the distribution is inferred exactly from the observed data, then the output of the algorithm is Markov equivalent to G.
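To give a concrete sense of the orthogonally decomposable (odeco) tensors that Chapter 2 builds on, the following minimal numpy sketch constructs a symmetric odeco tensor T = Σᵢ λᵢ vᵢ⊗vᵢ⊗vᵢ from orthonormal vectors and checks numerically that each vᵢ satisfies the well-known eigenvector identity T(vᵢ, vᵢ, ·) = λᵢ vᵢ. This is an illustration of the base case only, not of the tensor-train extension or the defining equations studied in the thesis; the variable names and dimensions are chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal vectors v_1, ..., v_4 from the QR decomposition of a random matrix.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
lam = np.array([3.0, 2.0, 1.5, 0.5])  # illustrative weights

# Symmetric odeco tensor: T = sum_i lam_i * v_i (x) v_i (x) v_i
T = sum(l * np.einsum('i,j,k->ijk', v, v, v) for l, v in zip(lam, Q.T))

# By orthonormality, contracting T twice against v_i isolates the i-th term:
# T(v_i, v_i, .) = sum_l lam_l <v_l, v_i>^2 v_l = lam_i v_i
for l, v in zip(lam, Q.T):
    w = np.einsum('ijk,i,j->k', T, v, v)
    assert np.allclose(w, l * v)
```

The contraction identity is exactly why odeco decompositions can be recovered robustly (e.g. by tensor power iteration), in contrast to general low-rank decompositions.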


Rights

Attribution-NonCommercial-NoDerivatives 4.0 International