gpp_model_selection_test¶
Contents:
gpp_model_selection_test.hpp¶
Functions for testing gpp_model_selection's functionality: the evaluation of LogMarginalLikelihood and LeaveOneOutLogLikelihood (plus gradient, hessian) and the optimization of these metrics with respect to the hyperparameters of the covariance function.
These will be abbreviated as:
- LML = LogMarginalLikelihood
- LOO-CV = Leave One Out Cross Validation
As in gpp_math_test, we have two main groups of tests:
- ping (unit) tests for gradient/hessian of LML and gradient of LOO-CV.
- unit + integration tests for optimization methods (gradient descent, newton)
The ping tests are set up the same way as the ping tests in gpp_covariance_test; using the function evaluator and ping framework defined in gpp_test_utils.
Finally, we have integration tests for LML and LOO-CV optimization. Unit tests for optimizers live in gpp_optimization_test.hpp/cpp. These integration tests use constructed data but exercise all the same code paths used for hyperparameter optimization in production.
namespace optimal_learning
Macro to allow restrict as a keyword for C++ compilation and CUDA/nvcc compilation. See related entry in gpp_common.hpp for more details.
gpp_model_selection_test.cpp¶
Routines to test the functions in gpp_model_selection.cpp.
These tests verify LogMarginalLikelihoodEvaluator and LeaveOneOutLogLikelihoodEvaluator and their optimizers:
- Ping testing (verifying analytic gradient computation against finite difference approximations)
- Following gpp_covariance_test.cpp, we define classes (PingLogLikelihood, PingHessianLogLikelihood) for evaluating log likelihood + gradient or log likelihood gradient + hessian (derivs wrt hyperparameters).
- Ping for derivative accuracy (PingLogLikelihoodTest, which is general enough for gradients and hessian); this is for derivatives wrt hyperparameters. These are for unit testing analytic derivatives.
- Gradient Descent + Newton unit tests: using polynomials and other simple functions with analytically known optima to verify that the optimizers are performing correctly.
- Hyperparameter optimization: we run hyperparameter optimization on toy problems using LML and LOO-CV likelihood as objective functions. Convergence to at least local maxima is verified for both gradient descent and newton optimizers. These function as integration tests.
int num_hyperparameters_
number of hyperparameters of the underlying covariance function
bool gradients_already_computed_
whether gradients have been computed and stored, i.e., whether this class is ready for use
LogLikelihoodEvaluator log_likelihood_eval_
log likelihood evaluator that is being tested (e.g., LogMarginalLikelihood, LeaveOneOutLogLikelihood)
std::vector< double > grad_log_marginal_likelihood_
the gradient of the log marginal measure wrt hyperparameters of covariance
std::vector< double > hessian_log_marginal_likelihood_
the hessian of the log marginal measure wrt hyperparameters of covariance
namespace optimal_learning
Macro to allow restrict as a keyword for C++ compilation and CUDA/nvcc compilation. See related entry in gpp_common.hpp for more details.
Functions
int RunLogLikelihoodPingTests()
Runs a battery of ping tests for the Log Likelihood Evaluators:
- Log Marginal: gradient and hessian wrt hyperparameters
- Leave One Out: gradient wrt hyperparameters
- Returns:
- number of test failures: 0 if all is working well.
int HyperparameterLikelihoodOptimizationTest(OptimizerTypes optimizer_type, LogLikelihoodTypes objective_mode)
Checks that hyperparameter optimization is working for the selected combination of OptimizerTypes (gradient descent, newton) and LogLikelihoodTypes (log marginal likelihood, leave-one-out cross-validation log pseudo-likelihood).
Note
The combination of newton and leave-one-out is not implemented.
- Parameters:
- optimizer_type: which optimizer to use
- objective_mode: which log likelihood measure to use
- Returns:
- number of test failures: 0 if hyperparameter optimization (based on marginal likelihood) is working properly
int EvaluateLogLikelihoodAtPointListTest()
Tests EvaluateLogLikelihoodAtPointList (computes log likelihood at a specified list of hyperparameters, multithreaded). Checks that the returned best point is in fact the best. Verifies multithreaded consistency.
- Returns:
- number of test failures: 0 if function evaluation is working properly