UBC Undergraduate Research

Restricted-dimension subgradient descent: asymptotic bounds on error
Sales, Emmanuel


Convex optimization, the study of minimizing convex functions over convex sets, is host to a multitude of methods with far-reaching applications in machine learning. For methods in convex optimization, it is often of interest to analyze the asymptotic bounds of the error of the method; in other words, how close the result of the method gets to the minimum value after a set number of iterations. Gradient descent, an iterative procedure that takes the gradients of convex functions at a sequence of points, is one of the most important algorithms in the field of convex optimization. Subgradient descent is a variant of gradient descent that replaces gradients with subgradients, so it can be applied to convex functions that need not be smooth. This particular variant is the primary focus of this text. This text explores asymptotic bounds on the final iterate error of subgradient descent; that is, the error between the result of the subgradient descent procedure after a set number of steps T and the true minimum value. Prior work has established tight asymptotic error bounds that are independent of the dimension of the underlying set; this work explores whether tighter bounds exist when the dimension of the underlying set is restricted to a finite constant d that is independent of T. In this work we have proven that in the case of d = 1, the final iterate error has an upper bound of O(1/√T).
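To make the setting concrete, the following is a minimal sketch (not taken from the thesis; function names and the step-size schedule η_t = 1/√t are illustrative assumptions) of one-dimensional subgradient descent on the non-smooth convex function f(x) = |x|, whose final iterate is the quantity the abstract's O(1/√T) bound concerns.

```python
import math

def subgradient_descent_last_iterate(subgrad, x0, T):
    """Run T steps of subgradient descent in one dimension (d = 1)
    and return the final iterate x_T.

    subgrad: a function returning any subgradient of f at x.
    Step sizes eta_t = 1/sqrt(t) are an illustrative choice, not
    the specific schedule analyzed in the thesis.
    """
    x = x0
    for t in range(1, T + 1):
        x = x - (1.0 / math.sqrt(t)) * subgrad(x)
    return x

# f(x) = |x| is convex but not smooth at 0; a valid subgradient is
# sign(x), with 0 being a valid subgradient at x = 0.
def abs_subgrad(x):
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)
```

For f(x) = |x| with minimum value 0, the final iterate error is simply |x_T|, and with this schedule it shrinks on the order of the last step size, roughly 1/√T, consistent with the type of bound discussed above.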



Attribution-NonCommercial-NoDerivatives 4.0 International
