BIRS Workshop Lecture Videos
Iteratively re-weighted least squares and ADMM methods for solving affine inclusions
Burke, James
Description
We describe two matrix-free methods for solving large-scale affine inclusion problems on the product (or intersection) of convex sets. The first approach is a novel iterative re-weighting algorithm (IRWA) that iteratively minimizes quadratic models of relaxed subproblems while automatically updating a relaxation vector. The second approach is based on alternating direction augmented Lagrangian (ADAL) technology. The main computational cost of each algorithm is the repeated minimization of convex quadratic functions, which can be performed matrix-free. Both algorithms are globally convergent under loose assumptions, and each requires at most $O(1/\varepsilon^2)$ iterations to reach $\varepsilon$-optimality of the objective function. Numerical experiments show that both algorithms efficiently find inexact solutions. However, in certain cases, these experiments indicate that IRWA can be significantly more efficient than ADAL.
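The abstract is high level; as a rough illustration of the iterative re-weighting idea it mentions (not the authors' IRWA itself), the sketch below applies classical iteratively re-weighted least squares to a least-absolute-deviations problem, where each iteration solves a weighted quadratic subproblem and shrinks a relaxation vector. All function names, parameters, and the toy problem are assumptions made for illustration only; the algorithm in the talk handles general affine inclusions over convex sets.

```python
import numpy as np

def irls_l1(A, b, num_iters=100, eps0=1.0, eps_decay=0.9):
    """Minimize sum_i |a_i^T x - b_i| by iteratively re-weighted least squares.

    Each iteration minimizes the quadratic model
        sum_i w_i (a_i^T x - b_i)^2,  with  w_i = 1 / sqrt(r_i^2 + eps_i^2),
    and then shrinks the relaxation vector eps, loosely mirroring the role of
    the relaxation vector described in the abstract (illustrative only).
    """
    m, n = A.shape
    x = np.zeros(n)
    eps = np.full(m, eps0)
    for _ in range(num_iters):
        r = A @ x - b                      # current residuals
        w = 1.0 / np.sqrt(r**2 + eps**2)   # re-weighting step
        # Weighted least-squares subproblem: (A^T W A) x = A^T W b.
        # A matrix-free implementation would solve this system with
        # conjugate gradients instead of forming the matrix explicitly.
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A + 1e-12 * np.eye(n), A.T @ (w * b))
        eps *= eps_decay                   # tighten the relaxation
    return x

# Toy usage: an overdetermined system solved in the least-absolute-deviations sense.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
x_true = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(50)
print(np.linalg.norm(irls_l1(A, b) - x_true))
```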
Item Metadata
Title | Iteratively re-weighted least squares and ADMM methods for solving affine inclusions
Creator | Burke, James
Publisher | Banff International Research Station for Mathematical Innovation and Discovery
Date Issued | 2017-09-18T09:38
Extent | 25.0
Subject |
Type |
File Format | video/mp4
Language | eng
Notes | Author affiliation: University of Washington
Series |
Date Available | 2019-03-08
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0376691
URI |
Affiliation |
Peer Review Status | Unreviewed
Scholarly Level | Faculty
Rights URI |
Aggregated Source Repository | DSpace