gpp_hyper_and_EI_demo
Contents:
gpp_hyper_and_EI_demo.cpp
moe/optimal_learning/cpp/gpp_hyper_and_EI_demo.cpp
This demo combines gpp_hyperparameter_optimization_demo.cpp and gpp_expected_improvement_demo.cpp. If you have read and understood those, then this demo should be very straightforward, since it is largely a direct combination of the two.
The purpose here is to give an “end to end” demo of how someone might use MOE/OL to generate new experimental cohorts, beginning with a set of known experimental cohorts/objective function values, measurement noise, and knowledge of any ongoing experiments.
The basic layout is:
- Set up input data sizes
- Generate random hyperparameters
- Generate (random) set of sampled point locations, noise variances
- Use a Gaussian Process (generator), constructed randomly from the inputs in steps 1-3, to generate imaginary objective function values
- Optimize hyperparameters on the constructed function values
- Select desired concurrent experiment locations (points_being_sampled)
- Construct Gaussian Process (model) to model the training data “world,” using the optimized hyperparameters
- Optimize Expected Improvement to decide which point we would sample next
  - Do this once using the optimized hyperparameters
  - And again using wrong hyperparameters, to emulate a practitioner who does not know how to choose them (but drawing from a GP with the same state). To do this, we build another GP (wrong_hyper) using the wrong hyperparameters but the same training data as the model GP
- Compare resulting function values
Steps 1-4 happen in both other demos. Step 5 is the heart of gpp_hyperparameter_optimization_demo.cpp and steps 6-7 are the heart of gpp_expected_improvement_demo.cpp.
Please read and understand the file comments for gpp_expected_improvement_demo.cpp (first) and gpp_hyperparameter_optimization_demo.cpp (second) before going through this demo. The comments are a lot sparser here than in the aforementioned two files to avoid redundancy.
Functions
int main()