gpp_hyperparameter_optimization_demo

gpp_hyperparameter_optimization_demo.cpp

moe/optimal_learning/cpp/gpp_hyperparameter_optimization_demo.cpp

This is a demo of the model selection (via hyperparameter optimization) capability present in this project. These routines live in gpp_model_selection.
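Concretely, model selection here means choosing the covariance hyperparameters \theta to maximize the log marginal likelihood of the training data. For a zero-mean GP this is the standard expression (the library's exact objectives and their gradients are documented in gpp_model_selection):

    \log p(y \mid X, \theta) = -\frac{1}{2} y^T K_\theta^{-1} y - \frac{1}{2} \log\det K_\theta - \frac{n}{2} \log 2\pi

where K_\theta is the n x n covariance matrix of the n training points under hyperparameters \theta.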

In gpp_expected_improvement_demo, the hyperparameters are chosen arbitrarily. Here, we walk through an example of how one would select hyperparameters for a given class of covariance function; we use SquareExponential (a sketch of this kernel follows the list below). This demo supports:

  1. User-specified training data
  2. Randomly generated training data (more automatic)
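For reference, a minimal stand-alone sketch of the squared exponential covariance is below. The function name and signature are illustrative only; the project's actual implementation lives in gpp_covariance, where the signal variance alpha and the per-dimension length scales are the hyperparameters being optimized:

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // Illustrative sketch only; the real class lives in gpp_covariance.
    // k(x, y) = alpha * exp(-0.5 * sum_i ((x_i - y_i) / length_i)^2)
    double SquareExponentialKernel(const std::vector<double>& x,
                                   const std::vector<double>& y,
                                   double alpha,                          // signal variance
                                   const std::vector<double>& lengths) {  // per-dim length scales
      double scaled_sq_dist = 0.0;
      for (std::size_t i = 0; i < x.size(); ++i) {
        const double d = (x[i] - y[i]) / lengths[i];
        scaled_sq_dist += d * d;
      }
      return alpha * std::exp(-0.5 * scaled_sq_dist);
    }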

More details on the second case (a self-contained sketch of this workflow follows the list):

  1. Choose a set of hyperparameters at random; these define the "source" covariance
  2. Build a fake* training set by drawing from a GP with the source covariance at randomly chosen locations
     (* By defining OL_USER_INPUTS to 1, you can specify your own input data instead.)
  3. Choose a new random set of hyperparameters and run hyperparameter optimization
    1. Show log likelihood using the optimized hyperparameters AND the source hyperparameters
    2. Observe that with larger training sets, the optimized hyperparameters converge to the source values; with smaller sets, other local optima may exist
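Below is a minimal, self-contained 1-D sketch of that workflow under simplifying assumptions (hand-rolled dense linear algebra, fixed RNG seed, no optimizer loop). Every name is local to this sketch; the demo itself uses the library's GP, log likelihood evaluators, and Newton optimization:

    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <random>
    #include <vector>

    using Matrix = std::vector<std::vector<double>>;

    // Squared exponential kernel in 1-D.
    double Kernel(double x, double y, double alpha, double length) {
      const double d = (x - y) / length;
      return alpha * std::exp(-0.5 * d * d);
    }

    Matrix BuildCovariance(const std::vector<double>& pts, double alpha,
                           double length, double noise) {
      const std::size_t n = pts.size();
      Matrix K(n, std::vector<double>(n));
      for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
          K[i][j] = Kernel(pts[i], pts[j], alpha, length) + (i == j ? noise : 0.0);
      return K;
    }

    // In-place Cholesky: afterwards the lower triangle holds L with K = L*L^T.
    void Cholesky(Matrix& K) {
      const std::size_t n = K.size();
      for (std::size_t j = 0; j < n; ++j) {
        for (std::size_t k = 0; k < j; ++k) K[j][j] -= K[j][k] * K[j][k];
        K[j][j] = std::sqrt(K[j][j]);
        for (std::size_t i = j + 1; i < n; ++i) {
          for (std::size_t k = 0; k < j; ++k) K[i][j] -= K[i][k] * K[j][k];
          K[i][j] /= K[j][j];
        }
      }
    }

    // Forward substitution: solve L*z = y.
    std::vector<double> ForwardSolve(const Matrix& L, const std::vector<double>& y) {
      std::vector<double> z(y);
      for (std::size_t i = 0; i < z.size(); ++i) {
        for (std::size_t k = 0; k < i; ++k) z[i] -= L[i][k] * z[k];
        z[i] /= L[i][i];
      }
      return z;
    }

    // log p(y | X, theta) = -0.5*z^T*z - sum_i log(L_ii) - 0.5*n*log(2*pi), z = L^{-1}*y.
    double LogMarginalLikelihood(const std::vector<double>& pts,
                                 const std::vector<double>& y,
                                 double alpha, double length, double noise) {
      constexpr double kLog2Pi = 1.8378770664093453;
      Matrix K = BuildCovariance(pts, alpha, length, noise);
      Cholesky(K);
      const std::vector<double> z = ForwardSolve(K, y);
      double quad = 0.0, half_logdet = 0.0;
      for (std::size_t i = 0; i < y.size(); ++i) {
        quad += z[i] * z[i];
        half_logdet += std::log(K[i][i]);
      }
      return -0.5 * quad - half_logdet - 0.5 * y.size() * kLog2Pi;
    }

    int main() {
      std::mt19937 gen(42);
      std::uniform_real_distribution<double> unif(0.0, 5.0);
      std::normal_distribution<double> normal(0.0, 1.0);

      // Step 1: "source" hyperparameters that generate the fake training data.
      const double alpha_src = 1.0, length_src = 0.8, noise = 1.0e-3;

      // Step 2: draw y ~ N(0, K_src) at random locations via y = L*u, u ~ N(0, I).
      const std::size_t n = 40;
      std::vector<double> pts(n);
      for (double& p : pts) p = unif(gen);
      Matrix L = BuildCovariance(pts, alpha_src, length_src, noise);
      Cholesky(L);
      std::vector<double> u(n), y(n, 0.0);
      for (double& ui : u) ui = normal(gen);
      for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k <= i; ++k) y[i] += L[i][k] * u[k];

      // Step 3: compare log likelihoods; an optimizer started from the guess
      // should climb toward the source values (for large enough n).
      std::printf("log likelihood at source hyperparameters: %.4f\n",
                  LogMarginalLikelihood(pts, y, alpha_src, length_src, noise));
      std::printf("log likelihood at arbitrary guess:        %.4f\n",
                  LogMarginalLikelihood(pts, y, 2.5, 0.2, noise));
      return 0;
    }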

Further notes about Newton optimization performance and robustness are spread throughout the demo code, placed near the function calls and object constructions to which they are relevant.
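For context, a plain Newton iteration on this objective takes steps of the generic textbook form (the library layers its own safeguards on top, as those in-code notes describe):

    \theta_{k+1} = \theta_k - H(\theta_k)^{-1} \nabla_\theta \log p(y \mid X, \theta_k)

where H is the Hessian of the log marginal likelihood with respect to the hyperparameters; H need not be negative definite far from an optimum, which is one source of the robustness caveats.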

Please read and understand gpp_expected_improvement_demo.cpp before going through this example. In addition, understanding the file comments of gpp_model_selection.hpp (and of the .cpp, for developers) is a prerequisite.

Defines

OL_USER_INPUTS
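This macro switches the demo to the user-specified training data path. One way to enable it (an illustrative example; the exact build invocation depends on your setup):

    // Pass it through the compiler when building the demo, e.g.
    //   g++ -DOL_USER_INPUTS=1 ... gpp_hyperparameter_optimization_demo.cpp ...
    // or define it near the top of the demo file before the macro is checked:
    #define OL_USER_INPUTS 1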

Functions

int main()