Open Collections
BIRS Workshop Lecture Videos
Calibration of Distributionally Robust Empirical Optimization Problems
Lim, Andrew
Description
We study the out-of-sample properties of robust empirical optimization and develop a theory for data-driven calibration of the "robustness parameter" for worst-case maximization problems with concave reward functions. Building on the intuition that robust optimization reduces the sensitivity of the expected reward to errors in the model by controlling the spread of the reward distribution, we show that the first-order benefit of "a little bit of robustness" is a significant reduction in the variance of the out-of-sample reward, while the corresponding impact on the mean is almost an order of magnitude smaller. One implication is that a substantial reduction in the variance of the out-of-sample reward (i.e., the sensitivity of the expected reward to model misspecification) is possible at little cost if the robustness parameter is properly calibrated. To this end, we introduce the notion of a robust mean-variance frontier to select the robustness parameter and show that it can be approximated using resampling methods like the bootstrap. Our examples show that robust solutions resulting from "open-loop" calibration methods (e.g., selecting a 90% confidence level regardless of the data and objective function) can be very conservative out-of-sample, while those corresponding to the ambiguity parameter that optimizes an estimate of the out-of-sample expected reward (e.g., via the bootstrap) with no regard for the variance are often insufficiently robust. This is joint work with Jun-ya Gotoh and Michael J. Kim.
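The abstract describes the calibration idea only at a high level, so the following is a minimal, hypothetical sketch of how a robust mean-variance frontier might be traced with the bootstrap. It is not the authors' code: it assumes a penalty-form (KL-divergence) robust objective with the standard exponential dual, a toy scalar decision with a quadratic-cost reward, synthetic data, and an illustrative grid of robustness parameters; helper names such as robust_solution are made up, and the original sample is used as a stand-in for out-of-sample data.

# Sketch only: bootstrap approximation of a robust mean-variance frontier
# for a toy concave-reward problem (assumed setup, not the talk's code).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
y_train = rng.lognormal(mean=0.0, sigma=0.5, size=200)   # synthetic data

def reward(x, y):
    # Toy concave reward: linear payoff minus quadratic cost.
    return x * y - 0.5 * x**2

def robust_objective(x, y, delta):
    # Penalty-form worst-case expected reward. For delta > 0 this is the
    # exponential dual form -(1/delta) * log E_n[exp(-delta * r(x, Y))];
    # delta = 0 recovers the plain sample-average reward.
    r = reward(x, y)
    if delta == 0.0:
        return r.mean()
    z = -delta * r
    m = np.max(z)                                  # log-sum-exp for stability
    return -(m + np.log(np.mean(np.exp(z - m)))) / delta

def robust_solution(y, delta):
    # Maximize the robust objective over the scalar decision x.
    res = minimize_scalar(lambda x: -robust_objective(x, y, delta),
                          bounds=(0.0, 10.0), method="bounded")
    return res.x

# For each robustness parameter: resample the data, solve the robust problem
# on the resample, and score the solution on the original sample as a proxy
# for the out-of-sample mean and variance of the reward.
deltas = [0.0, 0.1, 0.5, 1.0, 2.0]
B = 200
frontier = []
for delta in deltas:
    means, variances = [], []
    for _ in range(B):
        y_boot = rng.choice(y_train, size=y_train.size, replace=True)
        x_b = robust_solution(y_boot, delta)
        r_out = reward(x_b, y_train)
        means.append(r_out.mean())
        variances.append(r_out.var())
    frontier.append((delta, np.mean(means), np.mean(variances)))

for delta, m, v in frontier:
    print(f"delta={delta:<4} mean={m:.4f} variance={v:.4f}")

Plotting the resulting (variance, mean) pairs against delta gives one crude picture of the trade-off the abstract refers to: the calibrated delta would be chosen from this frontier rather than fixed in advance.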
Item Metadata
Title | Calibration of Distributionally Robust Empirical Optimization Problems
Creator | Lim, Andrew
Publisher | Banff International Research Station for Mathematical Innovation and Discovery
Date Issued | 2018-03-06T10:57
Extent | 40 minutes
Subject |
Type |
File Format | video/mp4
Language | eng
Notes | Author affiliation: National University of Singapore
Series |
Date Available | 2018-09-03
Provider | Vancouver : University of British Columbia Library
Rights | Attribution-NonCommercial-NoDerivatives 4.0 International
DOI | 10.14288/1.0371888
URI |
Affiliation |
Peer Review Status | Unreviewed
Scholarly Level | Faculty
Rights URI |
Aggregated Source Repository | DSpace