Kriging is well-suited to parallelize optimization
Abstract
The optimization of expensive-to-evaluate functions generally relies on metamodel-based exploration strategies. Many deterministic global optimization algorithms used in the field of computer experiments are based on Kriging (Gaussian process regression). Starting with a spatial predictor that includes a measure of uncertainty, they proceed by iteratively choosing the point that maximizes a criterion trading off predicted performance against uncertainty. Distributing the evaluation of such numerically expensive objective functions over many processors is an appealing idea. Here we investigate a multi-points optimization criterion, the multi-points expected improvement ($q$-$\mathbb{E}I$), aimed at choosing several points at the same time. An analytical expression of the $q$-$\mathbb{E}I$ is given for $q = 2$, and a consistent statistical estimate is given for the general case. We then propose two classes of heuristic strategies meant to approximately optimize the $q$-$\mathbb{E}I$, and apply them to the classical Branin-Hoo test-case function. It is finally demonstrated, within the covered example, that the latter strategies perform as well as the best Latin Hypercube and Uniform Designs found by simulation (2000 designs drawn at random for every $q \in [1, 10]$).
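As a companion to the abstract, the sketch below illustrates the kind of consistent statistical estimate referred to above: a plain Monte Carlo estimator of the multi-points expected improvement. It assumes the Kriging posterior at the $q$ candidate points has already been summarized by a mean vector and covariance matrix (here passed in as `mu` and `cov`, which are hypothetical inputs from some fitted model); the function name `q_ei_monte_carlo` is illustrative, not part of the paper.

```python
import numpy as np

def q_ei_monte_carlo(mu, cov, current_min, n_sim=10_000, rng=None):
    """Monte Carlo estimate of the multi-points expected improvement (q-EI).

    mu          : Kriging posterior mean at the q candidate points, shape (q,)
    cov         : Kriging posterior covariance at those points, shape (q, q)
    current_min : smallest objective value observed so far
    n_sim       : number of joint Gaussian samples used for the estimate
    """
    rng = np.random.default_rng() if rng is None else rng
    # Draw joint samples of the unknown responses at the q candidate points
    samples = rng.multivariate_normal(mu, cov, size=n_sim)   # shape (n_sim, q)
    # Improvement of each simulated batch over the current minimum (0 if none)
    improvement = np.maximum(current_min - samples.min(axis=1), 0.0)
    # The sample mean converges to the q-EI as n_sim grows (law of large numbers)
    return improvement.mean()
```

For $q = 2$ this estimate can be checked against the analytical expression mentioned in the abstract; for larger $q$ it serves as the general-purpose fallback, at the cost of Monte Carlo noise that shrinks as `n_sim` increases.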