We formulate a distributionally robust optimization problem in which the deviation of the alternative distribution is controlled by a ϕ-divergence penalty in the objective, and show that a large class of these problems is essentially equivalent to a mean–variance problem. We also show that while a “small amount of robustness” always reduces the in-sample expected reward, the accompanying reduction in the variance, which is a measure of sensitivity to model misspecification, is an order of magnitude larger.
Gotoh, J.-y., Kim, M. J., & Lim, A. E. B. (2018). Robust empirical optimization is almost the same as mean–variance optimization. Operations Research Letters, 46(4), 448–452. https://doi.org/10.1016/j.orl.2018.05.005
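A minimal sketch of the first-order expansion behind this equivalence, in notation assumed here rather than taken from the paper (P_n the empirical distribution, r(x, Z) the reward, δ > 0 the robustness parameter, ϕ''(1) the curvature of the divergence at 1); constants and sign conventions follow the usual reward-maximization setup and may differ from the paper's exact statement:

\[
  \min_{Q}\;\Bigl\{\, \mathbb{E}_{Q}[r(x,Z)] + \tfrac{1}{\delta}\, D_{\phi}(Q \,\|\, P_n) \,\Bigr\}
  \;=\;
  \mathbb{E}_{P_n}[r(x,Z)] \;-\; \frac{\delta}{2\,\phi''(1)}\, \operatorname{Var}_{P_n}\!\bigl(r(x,Z)\bigr) \;+\; o(\delta)
\]

So, to first order in δ, the penalized robust empirical problem coincides with a mean–variance problem; at the robust solution the loss in in-sample expected reward is O(δ²) while the reduction in variance is O(δ), consistent with the “order of magnitude larger” effect described in the abstract.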