BIRS Workshop Lecture Videos
A Continuum of Optimal Primal-Dual Algorithms for Convex Composite Minimization Problems with Applications to Structured Sparsity
Won, Joong-Ho
Description
Many statistical learning problems can be posed as minimization of a sum of two convex functions, one of which is typically a composition of a non-smooth function with a linear map. Examples include regression under structured sparsity assumptions. Popular algorithms for solving such problems, e.g., ADMM, often involve non-trivial optimization subproblems or smoothing approximations. We consider two classes of primal-dual algorithms that do not incur these difficulties, and unify them from the perspective of monotone operator theory. From this unification we propose a continuum of preconditioned forward-backward operator splitting algorithms amenable to parallel and distributed computing. We establish rates of convergence over the entire region of convergence of the whole continuum of algorithms; for some known instances of this continuum, our analysis closes a gap in the existing theory. We further exploit the unification to propose a continuum of accelerated algorithms, and show that the whole continuum attains the theoretically optimal rate of convergence. The scalability of the proposed algorithms, as well as their convergence behavior, is demonstrated on problems with up to 1.2 million variables using a distributed implementation.
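For orientation, the composite problem class described in the abstract is commonly written as follows; this explicit form and the sample scheme below are standard illustrations, not taken from this record:

\[
\min_{x} \; f(x) + g(Kx),
\]

where $f$ and $g$ are convex, $g$ is possibly non-smooth, and $K$ is a linear map. For example, in generalized-lasso-type structured sparsity problems, $g = \lambda \|\cdot\|_1$ and $K$ is a difference or structure-inducing matrix. A well-known primal-dual scheme for this class, the Chambolle-Pock primal-dual hybrid gradient method, iterates

\[
x^{k+1} = \operatorname{prox}_{\tau f}\big(x^k - \tau K^{\top} y^k\big),
\qquad
y^{k+1} = \operatorname{prox}_{\sigma g^{*}}\big(y^k + \sigma K(2x^{k+1} - x^k)\big),
\]

and converges under the step-size condition $\sigma \tau \|K\|^{2} < 1$. Schemes of this type are known to arise from preconditioned operator-splitting constructions, the viewpoint the abstract takes; whether this particular scheme is among the instances analyzed in the talk is not stated in this record.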
Item Metadata
Title | A Continuum of Optimal Primal-Dual Algorithms for Convex Composite Minimization Problems with Applications to Structured Sparsity
Creator | Won, Joong-Ho
Publisher | Banff International Research Station for Mathematical Innovation and Discovery
Date Issued | 2018-05-22T11:07
Extent | 43.0
File Format | video/mp4
Language | eng
Notes | Author affiliation: Seoul National University
Date Available | 2019-03-20
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0377196
Peer Review Status | Unreviewed
Scholarly Level | Researcher
Rights URI | http://creativecommons.org/licenses/by-nc-nd/4.0/
Aggregated Source Repository | DSpace