Abstract
Of central importance to the αBB algorithm is the calculation of the α values that guarantee the convexity of the underestimator. Reducing these values yields tighter underestimators and can thus improve the performance of the algorithm. For instance, Wechsung et al. (J Glob Optim 58(3):429–438, 2014) showed that the emergence of the cluster effect can depend on the magnitude of the α values. Motivated by this, we present a refinement method that can reduce the magnitude of the α values given by the scaled Gerschgorin method and thus produce tighter convex underestimators for the αBB algorithm. We apply the new method and compare it with the scaled Gerschgorin method on randomly generated symmetric interval matrices as well as on interval Hessians taken from test functions. As a measure of comparison, we use the maximal separation distance between the original function and the underestimator. Based on the results obtained, we conclude that the proposed refinement method can significantly reduce the maximal separation distance compared to the scaled Gerschgorin method. This approach therefore has the potential to improve the performance of the αBB algorithm.
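To make the two quantities discussed above concrete, the following sketch computes α values via the standard scaled Gerschgorin formula (Adjiman et al.) from the elementwise bounds of an interval Hessian, and evaluates the maximal separation distance of the resulting αBB underestimator, which is attained at the box midpoint. This is an illustrative implementation under stated assumptions, not the refinement scheme proposed in the paper; the function names and the choice of scaling vector `d` are ours.

```python
import numpy as np

def scaled_gerschgorin_alpha(H_low, H_up, d=None):
    """Scaled Gerschgorin alpha values for an interval Hessian [H_low, H_up].

    alpha_i = max(0, -1/2 * (h_ii^L - sum_{j != i} max(|h_ij^L|, |h_ij^U|) * d_j / d_i)),
    where d is a positive scaling vector (a common choice is d_i = x_i^U - x_i^L;
    d = ones recovers the unscaled variant).
    """
    n = H_low.shape[0]
    d = np.ones(n) if d is None else np.asarray(d, dtype=float)
    alpha = np.zeros(n)
    for i in range(n):
        off_diag = sum(max(abs(H_low[i, j]), abs(H_up[i, j])) * d[j] / d[i]
                       for j in range(n) if j != i)
        alpha[i] = max(0.0, -0.5 * (H_low[i, i] - off_diag))
    return alpha

def max_separation(alpha, x_low, x_up):
    """Maximal separation distance of the alphaBB underestimator
    f(x) - sum_i alpha_i (x_i - x_i^L)(x_i^U - x_i):
    sum_i alpha_i (x_i^U - x_i^L)^2 / 4, attained at the box midpoint."""
    return float(np.sum(np.asarray(alpha) * (np.asarray(x_up) - np.asarray(x_low)) ** 2 / 4.0))

# Example: a 2x2 symmetric interval Hessian on the unit box.
H_low = np.array([[-2.0, -1.0], [-1.0, -2.0]])
H_up = np.array([[2.0, 1.0], [1.0, 2.0]])
alpha = scaled_gerschgorin_alpha(H_low, H_up)          # → [1.5, 1.5]
dist = max_separation(alpha, [0.0, 0.0], [1.0, 1.0])   # → 0.75
```

Any reduction of the α values translates directly into a quadratically weighted reduction of this separation distance, which is why tighter α bounds matter for the overall branch-and-bound performance.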
Nerantzis, D., & Adjiman, C. S. (2019). Tighter α BB relaxations through a refinement scheme for the scaled Gerschgorin theorem. Journal of Global Optimization, 73(3), 467–483. https://doi.org/10.1007/s10898-018-0718-y