BIRS Workshop Lecture Videos
Distributionally Robust Contextual Optimization: A Generative Approach
Gao, Rui
Description
We study distributionally robust contextual optimization, where a decision maker seeks a context-dependent policy that minimizes the worst-case expected loss over an ambiguity set of joint distributions of covariates and outcomes. Standard approaches first reformulate the worst-case expectation into a tractable finite-dimensional program and then solve an outer policy-optimization problem, typically over a parametric class (e.g., affine rules or neural nets). However, this workflow often obscures the inherent tractability of many contextual OR models, where non-robust policy optimization reduces to simple conditional optimization. To leverage this feature, we invoke a minimax interchange to recast the robust problem as a maximization over distributions, which we solve via a generative approach. Concretely, we compute a least-favorable distribution using a particle-based first-order method, and then recover the robust policy as an optimal response to this distribution. Focusing on Sinkhorn uncertainty sets, we establish global convergence guarantees in the large-particle regime.
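In symbols (notation ours, inferred from the abstract): with policy class $\Pi$, ambiguity set $\mathcal{P}$ of joint distributions of covariates and outcomes $(X, Y)$, and loss $\ell$, the minimax interchange referenced above reads

$$
\inf_{\pi \in \Pi} \sup_{Q \in \mathcal{P}} \mathbb{E}_{Q}\big[\ell(\pi(X), Y)\big]
= \sup_{Q \in \mathcal{P}} \inf_{\pi} \mathbb{E}_{Q}\big[\ell(\pi(X), Y)\big]
= \sup_{Q \in \mathcal{P}} \mathbb{E}_{Q_X}\Big[\min_{a} \mathbb{E}_{Q_{Y \mid X}}\big[\ell(a, Y)\big]\Big],
$$

where the second equality is the conditional decomposition that makes the non-robust inner problem simple, and the first holds under suitable minimax conditions.

The sketch below is a minimal illustration of the particle-based idea, not the algorithm from the talk: it perturbs only the outcomes, replaces the Sinkhorn penalty with a quadratic transport penalty plus injected Gaussian noise as a stand-in for entropic smoothing, and uses a newsvendor-style loss whose conditional best response is a quantile. All data, costs, bandwidths, and step sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reference sample of (covariate x, outcome y); purely illustrative.
n = 200
x0 = rng.uniform(0.0, 1.0, size=n)
y0 = 2.0 * x0 + 0.3 * rng.standard_normal(n)

# Newsvendor-style loss with overage/underage costs c_o, c_u (hypothetical).
c_o, c_u = 1.0, 3.0

def loss_grad_y(a, y):
    # Subgradient of c_o*max(a-y,0) + c_u*max(y-a,0) in y,
    # used as the adversary's ascent direction.
    return np.where(y > a, c_u, -c_o)

# Best response to a particle cloud: for this loss, the optimal action at x
# is the c_u/(c_u+c_o) conditional quantile of y; we estimate it with a
# kernel-weighted quantile (a modeling choice for this sketch).
tau = c_u / (c_u + c_o)
bandwidth = 0.1

def best_response(x_query, xs, ys):
    w = np.exp(-0.5 * ((xs - x_query) / bandwidth) ** 2)
    order = np.argsort(ys)
    cdf = np.cumsum(w[order]) / np.sum(w)
    return ys[order][min(np.searchsorted(cdf, tau), len(ys) - 1)]

# Particle-based first-order ascent toward a least-favorable distribution:
# particles start at the data; each step moves outcomes uphill on the loss
# of the current best-response policy, minus lam*(y - y0) from a quadratic
# transport penalty (a simplification of the Sinkhorn penalty), plus noise
# standing in for entropic smoothing.
lam, step, noise = 5.0, 0.05, 0.02
y = y0.copy()
for _ in range(100):
    a = np.array([best_response(xi, x0, y) for xi in x0])
    y = y + step * (loss_grad_y(a, y) - lam * (y - y0))
    y = y + noise * rng.standard_normal(n)

# Robust policy: best response to the final (approximately
# least-favorable) particle cloud.
print("robust action at x = 0.5:", best_response(0.5, x0, y))
```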
Item Metadata
| Field | Value |
| --- | --- |
| Title | Distributionally Robust Contextual Optimization: A Generative Approach |
| Creator | Gao, Rui |
| Publisher | Banff International Research Station for Mathematical Innovation and Discovery |
| Date Issued | 2026-02-24 |
| Extent | 27.0 minutes |
| File Format | video/mp4 |
| Language | eng |
| Notes | Author affiliation: University of Texas at Austin |
| Date Available | 2026-03-02 |
| Provider | Vancouver : University of British Columbia Library |
| Rights | Attribution-NonCommercial-NoDerivatives 4.0 International |
| DOI | 10.14288/1.0451596 |
| Peer Review Status | Unreviewed |
| Scholarly Level | Researcher |
| Aggregated Source Repository | DSpace |