Restricted-dimension subgradient descent : asymptotic bounds on error
Sales, Emmanuel
Abstract
Convex optimization, the study of minimizing convex functions over convex sets, is host to a multitude of methods with far-reaching applications in machine learning. For such methods, it is often of interest to analyze asymptotic bounds on the error, in other words, how close the method's result gets to the minimum value after a set number of steps.
Gradient descent, an iterative procedure that repeatedly steps along the negative gradient of a convex function at a sequence of points, is one of the most important algorithms in convex optimization. Subgradient descent extends gradient descent to convex functions that need not be smooth, using a subgradient in place of the gradient at each step. This particular type of gradient descent is the primary focus of this text.
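To make the procedure concrete, here is a minimal sketch (not taken from the thesis itself) of the subgradient method applied to the one-dimensional nonsmooth function f(x) = |x|; the constant step size of order 1/√T and the starting point are illustrative assumptions.

```python
import numpy as np

def subgradient_descent(subgrad, x0, step_size, T):
    """Run T steps of subgradient descent and return the final iterate x_T.

    subgrad(x) must return any subgradient of the objective at x.
    """
    x = x0
    for _ in range(T):
        x = x - step_size * subgrad(x)
    return x

# f(x) = |x| is convex but not differentiable at 0; sign(x) is a valid subgradient.
T = 10_000
x_T = subgradient_descent(np.sign, x0=1.0, step_size=1.0 / np.sqrt(T), T=T)

# Final iterate error: f(x_T) - min f = |x_T| - 0, which is of order 1/sqrt(T) here.
print(abs(x_T))
```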
This text explores asymptotic bounds on the final iterate error of subgradient descent, that is, the gap between the objective value at the iterate produced after a set number of steps T and the minimum value. Prior work has established tight asymptotic error bounds that are independent of the dimension of the underlying set; this work explores whether tighter bounds exist when the dimension of the underlying set is restricted to a finite constant d that is independent of T. We prove that in the case d = 1, the final iterate error has an upper bound of O(1/√T).
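Stated symbolically (a restatement of the claim above; the notation x_1, …, x_T for the iterates and X for the underlying convex set is chosen here for illustration):

```latex
\[
  \varepsilon_T \;=\; f(x_T) \;-\; \min_{x \in X} f(x)
  \qquad\text{and}\qquad
  \varepsilon_T \;=\; O\!\left(\frac{1}{\sqrt{T}}\right)
  \quad\text{when } \dim X = d = 1 .
\]
```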
Item Metadata

| Field | Value |
| --- | --- |
| Title | Restricted-dimension subgradient descent : asymptotic bounds on error |
| Creator | Sales, Emmanuel |
| Date Issued | 2020-04 |
| Description | See Abstract above. |
| Genre | |
| Type | |
| Language | eng |
| Series | |
| Date Available | 2021-06-02 |
| Provider | Vancouver : University of British Columbia Library |
| Rights | Attribution-NonCommercial-NoDerivatives 4.0 International |
| DOI | 10.14288/1.0398234 |
| URI | |
| Affiliation | |
| Peer Review Status | Unreviewed |
| Scholarly Level | Undergraduate |
| Rights URI | |
| Aggregated Source Repository | DSpace |