UBC Theses and Dissertations
Near-optimal sample complexity for noisy or 1-bit tensor completion
Ghadermarzy, Navid
Abstract
Tensor completion is the problem of recovering a low-rank tensor from a partial subset of its entries. Assuming a rank-r, order-d tensor in ℝ^{N×N×⋯×N}, the best sample complexity achieved to date is O(rN^{d/2}), which can be obtained by a tensor nuclear-norm minimization problem. This bound is significantly larger than O(rdN), the number of free variables in a rank-r tensor. In this thesis, we prove that when r = O(1), we can achieve optimal sample complexity by constraining one of two proxies for tensor rank: the convex M-norm or the nonconvex max-qnorm. The max-qnorm is the (nonconvex) generalization of the matrix max-norm to tensors. The M-norm, on the other hand, is a convex atomic norm whose atoms are rank-1 sign tensors. We prove that both the max-qnorm and the M-norm of a bounded rank-r tensor are bounded by quantities independent of N. We also prove that the unit balls of both norms have small Rademacher complexity.
We analyze max-qnorm- and M-norm-constrained least squares minimization problems for tensor completion, proving that when r = O(1), m = O(Nd) measurements are sufficient for efficient estimation of the original tensor. We also use an information-theoretic technique to prove that the dependence on N is optimal. Moreover, we design an alternating method for approximating the solution of max-qnorm tensor completion and thoroughly investigate its performance on synthetic and real-world data.
We also generalize the 1-bit matrix completion problem to higher-order tensors. We prove that when r = O(1), a bounded rank-r, order-d tensor T in ℝ^{N×N×⋯×N} can be estimated efficiently from only m = O(Nd) binary measurements by regularizing either its max-qnorm or its M-norm. We prove that the sample complexity of recovering a low-rank tensor from 1-bit measurements of a subset of its entries is the same as recovering it from unquantized measurements. Moreover, we show the advantage of 1-bit tensor completion over matricization both theoretically and numerically. Specifically, we show how the 1-bit measurement model can be used for context-aware recommender systems.
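The norm-constrained least-squares estimator summarized in the abstract can be written schematically as follows. This is a sketch only: the symbols Ω (set of observed multi-indices), Y (noisy observations), and R (bound on the M-norm or max-qnorm of the ground truth) are notational assumptions, not taken verbatim from the thesis.

```latex
% Norm-constrained least squares over observed entries (schematic):
% \Omega = observed multi-indices, Y_\omega = noisy observation of
% entry \omega, R = assumed bound on the norm of the true tensor.
\hat{T} \;=\; \operatorname*{arg\,min}_{\|X\|_{M} \le R}
    \;\sum_{\omega \in \Omega} \bigl( X_{\omega} - Y_{\omega} \bigr)^{2}
```

The same template applies with the max-qnorm in place of the M-norm; the thesis's results state that with r = O(1), on the order of Nd observations suffice for this type of estimator.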
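The thesis develops its alternating method for the max-qnorm formulation specifically. As a rough, generic illustration of the alternating-minimization idea for tensor completion (not the thesis's algorithm), here is a minimal CP-factor alternating least squares sketch; the sizes, ridge parameter, and sampling rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: a rank-2, order-3 tensor built from CP factors,
# observed on a random ~30% subset of its entries (noiseless here).
N, r = 15, 2
A, B, C = (rng.standard_normal((N, r)) for _ in range(3))
T = np.einsum('ir,jr,kr->ijk', A, B, C)
mask = rng.random((N, N, N)) < 0.3
Y = np.where(mask, T, 0.0)


def update_factor(mode, U, Y, mask, lam):
    """Refit one CP factor with the other two held fixed.

    Each row i solves a small ridge least-squares problem over the
    observed entries of slice i along axis `mode`.
    """
    N, r = U[mode].shape
    others = [f for m, f in enumerate(U) if m != mode]
    Ym, Mm = np.moveaxis(Y, mode, 0), np.moveaxis(mask, mode, 0)
    new = np.zeros((N, r))
    for i in range(N):
        obs = np.argwhere(Mm[i])                    # observed index pairs
        design = others[0][obs[:, 0]] * others[1][obs[:, 1]]
        gram = design.T @ design + lam * np.eye(r)  # ridge for stability
        new[i] = np.linalg.solve(gram, design.T @ Ym[i][Mm[i]])
    return new


# Alternate over the three factors until the observed entries are fit.
U = [rng.standard_normal((N, r)) for _ in range(3)]
for _ in range(30):
    for mode in range(3):
        U[mode] = update_factor(mode, U, Y, mask, lam=1e-3)

That = np.einsum('ir,jr,kr->ijk', *U)
obs_err = np.linalg.norm((That - T)[mask]) / np.linalg.norm(T[mask])
print(f"relative error on observed entries: {obs_err:.2e}")
```

Alternating over factors is attractive here because each subproblem is a small linear least-squares solve, even though the joint problem is nonconvex; the thesis's method applies the same fix-all-but-one idea to a max-qnorm-regularized objective instead.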
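For the 1-bit setting, a standard measurement model (in the style of 1-bit matrix completion: observe only the sign of an entry plus noise, equivalently a Bernoulli draw through a link function) can be simulated as below. The tensor sizes, noise level, and sampling scheme are illustrative assumptions, not parameters from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a rank-2, order-3 tensor T as a sum of outer products (CP form).
N, r, d = 20, 2, 3
factors = [rng.standard_normal((N, r)) for _ in range(d)]
T = np.einsum('ir,jr,kr->ijk', *factors)
T /= np.abs(T).max()  # bounded entries, matching the boundedness assumption

# Observe m random entries through a 1-bit (sign) measurement:
# Y = sign(T + Z) with Gaussian noise Z, i.e. P(Y = 1) = Phi(T / sigma).
m = 6 * N             # on the order of N*d samples, the regime discussed above
idx = tuple(rng.integers(0, N, size=(3, m)))  # sampled multi-indices
sigma = 0.5
Y = np.sign(T[idx] + sigma * rng.standard_normal(m))

print(Y[:10])  # each measurement is just +1 or -1
```

Recovery from such measurements would then maximize a norm-constrained log-likelihood rather than a least-squares fit; the thesis's result is that the sample complexity in this quantized regime matches the unquantized one.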
Item Metadata
Title 
Near-optimal sample complexity for noisy or 1-bit tensor completion

Creator 
Ghadermarzy, Navid

Publisher 
University of British Columbia

Date Issued 
2018


Genre  
Type  
Language 
eng

Date Available 
2018-08-21

Provider 
Vancouver : University of British Columbia Library

Rights 
Attribution-NonCommercial-NoDerivatives 4.0 International

DOI 
10.14288/1.0371162

URI  
Degree  
Program  
Affiliation  
Degree Grantor 
University of British Columbia

Graduation Date 
2018-09

Campus  
Scholarly Level 
Graduate

Rights URI  
Aggregated Source Repository 
DSpace
