We study the out-of-sample properties of robust empirical optimization problems with smooth φ-divergence penalties and smooth concave objective functions, and we develop a theory for data-driven calibration of the nonnegative “robustness parameter” δ that controls the size of the deviations from the nominal model. Building on the intuition that robust optimization reduces the sensitivity of the expected reward to errors in the model by controlling the spread of the reward distribution, we show that the first-order benefit of a “little bit of robustness” (i.e., δ small, positive) is a significant reduction in the variance of the out-of-sample reward, whereas the corresponding impact on the mean is almost an order of magnitude smaller. One implication is that substantial variance (sensitivity) reduction is possible at little cost if the robustness parameter is properly calibrated. To this end, we introduce the notion of a robust mean-variance frontier to select the robustness parameter and show that it can be approximated using resampling methods such as the bootstrap. Our examples show that robust solutions resulting from “open-loop” calibration methods (e.g., selecting a 90% confidence level regardless of the data and objective function) can be very conservative out of sample, whereas those corresponding to the robustness parameter that optimizes an estimate of the out-of-sample expected reward (e.g., via the bootstrap) with no regard for the variance are often insufficiently robust.
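The calibration procedure described in the abstract can be made concrete with a short, illustrative sketch. The Python code below is not the authors' implementation: it replaces the full φ-divergence-penalized problem with the variance-penalized surrogate mean − (δ/2)·variance, in the spirit of the first-order variance-reduction result stated above, and the reward function r(x, y) = xy − x²/2, the normal data-generating distribution, the δ grid, and the bootstrap size B are all illustrative assumptions. For each candidate δ it solves the surrogate problem on bootstrap resamples and evaluates the resulting decision on the original sample, tracing an approximate robust mean-variance frontier.

# Minimal sketch (assumptions noted above), not the paper's code.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
data = rng.normal(loc=1.0, scale=2.0, size=200)   # illustrative sample of Y

def reward(x, y):
    # Smooth concave reward in the decision x (illustrative choice).
    return x * y - 0.5 * x**2

def solve_penalized(sample, delta):
    # Maximize the variance-penalized empirical objective:
    # mean reward minus (delta/2) times the sample variance of the reward.
    def neg_obj(x):
        r = reward(x, sample)
        return -(r.mean() - 0.5 * delta * r.var())
    return minimize_scalar(neg_obj, bounds=(-10.0, 10.0), method="bounded").x

def bootstrap_frontier(sample, deltas, B=200):
    # For each delta, estimate the out-of-sample mean and variance of the
    # reward by resampling (train on a bootstrap sample, evaluate on the
    # original sample as an out-of-sample proxy).
    frontier = []
    for delta in deltas:
        means, variances = [], []
        for _ in range(B):
            train = rng.choice(sample, size=sample.size, replace=True)
            x_hat = solve_penalized(train, delta)
            r_out = reward(x_hat, sample)
            means.append(r_out.mean())
            variances.append(r_out.var())
        frontier.append((delta, np.mean(means), np.mean(variances)))
    return frontier

for delta, m, v in bootstrap_frontier(data, deltas=[0.0, 0.05, 0.1, 0.2, 0.5]):
    print(f"delta={delta:5.2f}  mean reward={m:7.4f}  reward variance={v:7.4f}")

Reading the printed frontier, one would choose the smallest δ beyond which further variance reduction comes only at a visible cost in mean reward, rather than fixing a confidence level in advance or maximizing the bootstrap estimate of the mean with no regard for the variance.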
CITATION
Gotoh, J. Y., Kim, M. J., & Lim, A. E. B. (2021). Calibration of distributionally robust empirical optimization models. Operations Research, 69(5), 1630–1650. https://doi.org/10.1287/opre.2020.2041