BIRS Workshop Lecture Videos
Small-Data, Large-Scale Linear Optimization
Gupta, Vishal
Description
Optimization applications often depend upon a huge number of uncertain parameters. In many contexts, however, the amount of relevant data per parameter is small, and hence, we may have only imprecise estimates. We term this setting (where the number of uncertainties is large, but all estimates have fixed and low precision) the "small-data, large-scale regime." We formalize a model for this regime, focusing on linear programs with uncertain objective coefficients, and prove that the small-data, large-scale regime is distinct from the traditional large-sample regime. Consequently, methods like sample average approximation, data-driven robust optimization, regularization, and "estimate-then-optimize" policies can perform poorly. We propose a novel framework that, given a policy class, identifies an asymptotically best-in-class policy, where the asymptotics hold as the number of uncertain parameters grows large, but the amount of data per uncertainty (and hence the estimate's precision) remains small. We apply our approach to two natural policy classes for this problem: the first inspired by the empirical Bayes literature in statistics and the second by the regularization literature in optimization and machine learning. In both cases, the sub-optimality gap between our proposed method and the best-in-class policy decays exponentially fast in the number of uncertain parameters, even for a fixed amount of data. We also show that in the usual large-sample regime our policies are comparable to the sample average approximation. Thus, our policies retain the strong large-sample performance of traditional methods, and additionally enjoy provably strong performance in the small-data, large-scale regime. Numerical experiments confirm the significant benefits of our methods.
Joint work with Prof. Paat Rusmevichientong, USC Marshall.
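As a rough illustration of the regime described in the abstract (and not the authors' implementation), the sketch below compares a plug-in "estimate-then-optimize" policy with a simple empirical-Bayes-style shrinkage policy on a fractional-knapsack linear program with many uncertain objective coefficients and only a few noisy observations per coefficient. The problem dimensions, noise level, and the method-of-moments shrinkage rule are all assumptions chosen for illustration; the paper's own policy-selection procedure is not reproduced here.

```python
# Illustrative sketch only: a small-data, large-scale LP with uncertain
# objective coefficients, comparing estimate-then-optimize against a simple
# empirical-Bayes shrinkage policy. All constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

d, S = 1000, 3                              # many coefficients, little data per coefficient
mu = rng.normal(1.0, 0.5, size=d)           # true (unknown) objective coefficients
a = rng.uniform(0.5, 2.0, size=d)           # known constraint weights
budget = 0.25 * a.sum()
noise_sd = 1.0

# Simulate the sample mean of S noisy, unbiased observations of each coefficient.
c_hat = mu + rng.normal(0.0, noise_sd / np.sqrt(S), size=d)

def solve_knapsack(values, weights, budget):
    """Fractional knapsack: max values @ x  s.t.  weights @ x <= budget, 0 <= x <= 1."""
    x = np.zeros(len(values))
    remaining = budget
    for j in np.argsort(-values / weights):  # best value-to-weight ratio first
        if values[j] <= 0:
            break
        take = min(1.0, remaining / weights[j])
        x[j] = take
        remaining -= take * weights[j]
        if remaining <= 0:
            break
    return x

# Policy 1: estimate-then-optimize (plug in the raw sample means).
x_plugin = solve_knapsack(c_hat, a, budget)

# Policy 2: shrink the estimates toward the grand mean before optimizing,
# using a method-of-moments estimate of the signal variance (a hypothetical
# choice, not the paper's policy-selection procedure).
sampling_var = noise_sd**2 / S
signal_var = max(c_hat.var() - sampling_var, 1e-8)
shrink = signal_var / (signal_var + sampling_var)
c_shrunk = c_hat.mean() + shrink * (c_hat - c_hat.mean())
x_shrunk = solve_knapsack(c_shrunk, a, budget)

# Evaluate both policies on the *true* coefficients.
print("true value, plug-in  :", mu @ x_plugin)
print("true value, shrinkage:", mu @ x_shrunk)
print("true value, oracle   :", mu @ solve_knapsack(mu, a, budget))
```

With many coefficients and a fixed, small number of samples each, the shrinkage policy typically recovers a larger share of the oracle value than the plug-in policy in this toy setup, which is the qualitative behavior the abstract describes.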
Item Metadata
Title |
Small-Data, Large-Scale Linear Optimization
|
Creator |
Gupta, Vishal
|
Publisher |
Banff International Research Station for Mathematical Innovation and Discovery
|
Date Issued |
2018-03-06T11:38
|
Extent |
39 minutes
|
Subject | |
Type | |
File Format |
video/mp4
|
Language |
eng
|
Notes |
Author affiliation: Marshall School of Business - University of Southern California
|
Series | |
Date Available |
2018-09-03
|
Provider |
Vancouver : University of British Columbia Library
|
Rights |
Attribution-NonCommercial-NoDerivatives 4.0 International
|
DOI |
10.14288/1.0371889
|
URI | |
Affiliation | |
Peer Review Status |
Unreviewed
|
Scholarly Level |
Faculty
|
Rights URI |
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Aggregated Source Repository |
DSpace
|