gpp_heuristic_expected_improvement_optimization_test

Contents:

gpp_heuristic_expected_improvement_optimization_test.hpp

Functions for testing gpp_heuristic_expected_improvement_optimization.cpp’s functionality. These tests are a combination of unit and integration tests for heuristic optimization methods for expected improvement (e.g., Constant Liar, Kriging Believer).

These heuristic methods are fairly simple compared to their optimal counterparts in gpp_math, so the tests generally validate output consistency and any relevant intermediate assumptions.

namespace optimal_learning

Macro to allow restrict as a keyword for C++ compilation and CUDA/nvcc compilation. See related entry in gpp_common.hpp for more details.

gpp_heuristic_expected_improvement_optimization_test.cpp

Routines to test the functions in gpp_heuristic_expected_improvement_optimization.cpp. The tests verify the subclasses of ObjectiveEstimationPolicyInterface and the correctness of ComputeHeuristicPointsToSample():

  1. ObjectiveEstimationPolicyInterface
    1. ConstantLiarEstimationPolicy: Verify that Constant Liar gives back the same, constant output regardless of its inputs (e.g., test against invalid inputs).
    2. KrigingBelieverEstimationPolicy: Kriging Believer’s output depends on GP computations (mean, variance). In some special cases, we know these quantities analytically, so we test that Kriging Believer gives the expected output in those cases.
  2. ComputeHeuristicPointsToSample: We have an end-to-end test of this functionality using both ConstantLiar and KrigingBeliever. We check that the output is valid (e.g., in the domain, distinct) and that the points correspond to local optima (i.e., each round of solving 1-EI succeeded).

namespace optimal_learning

Macro to allow restrict as a keyword for C++ compilation and CUDA/nvcc compilation. See related entry in gpp_common.hpp for more details.

Functions

int EstimationPolicyTest()

Checks that the subclasses of ObjectiveEstimationPolicyInterface declared in gpp_heuristic_expected_improvement_optimization.hpp are working correctly. Right now, these are:

  1. Constant Liar
  2. Kriging Believer

We set up contrived environments where the outputs of these policies are known exactly.

Returns:
number of test failures: 0 if estimation policies are working properly

int HeuristicExpectedImprovementOptimizationTest()

Checks that ComputeHeuristicPointsToSample() works on a tensor product domain using both the ConstantLiarEstimationPolicy and the KrigingBelieverEstimationPolicy. This test assumes that the code tested in:

  1. ExpectedImprovementOptimizationTest(DomainTypes::kTensorProduct, ExpectedImprovementEvaluationMode::kAnalytic)
  2. EstimationPolicyTest()

is working.

This test checks the generation of multiple, simultaneous experimental points to sample using various objective function estimation heuristics; i.e., no Monte Carlo integration is needed.

Returns:
number of test failures: 0 if heuristic EI optimization is working properly