gpp_expected_improvement_demo
Contents:
gpp_expected_improvement_demo.cpp
moe/optimal_learning/cpp/gpp_expected_improvement_demo.cpp
This is a demo for the Gaussian Process and (optimization of) Expected Improvement capabilities present in this project. These capabilities live in gpp_math.
The layout is:
- Set up input data sizes
- Specify hyperparameters
- Generate (random) set of sampled point locations, noise variances
- Generate data for the Gaussian Process Prior:
  - Use a randomly constructed Gaussian Process to generate imaginary objective function values, OR
  - Use user-provided input training data (by defining OL_USER_INPUTS to 1, you can specify your own input data)
- Select desired concurrent experiment locations (points_being_sampled)
- Construct Gaussian Process to model the training data “world”
- Optimize Expected Improvement to decide what point we would sample next
The random case will be generated by repeatedly drawing from a GP at randomly chosen locations. For real-world use cases (or in the case of user-provided data), we use the GP as a surrogate model. For the random case, the surrogate and “reality” are the same.
Then we run Expected Improvement optimization with our GP to produce the next-best point to sample. This highlights the core ideas in the optimal_learning project: GP construction and usage (as a surrogate model) through expected improvement optimization to produce "good" new samples.
Please read and understand the file comments for gpp_math.hpp (and gpp_math.cpp for developers) before going through this example. It also wouldn't hurt to look at gpp_covariance.hpp.
Defines
- OL_USER_INPUTS

Functions
- int main()