moe.tests.optimal_learning.python.python_version package

Submodules

moe.tests.optimal_learning.python.python_version.covariance_test module

Test cases for the Square Exponential covariance function and its spatial gradient.

Testing is sparse at the moment. The C++ implementations are tested thoroughly (gpp_covariance_test.hpp/cpp) and we rely more on moe.tests.optimal_learning.python.cpp_wrappers.covariance_test's comparison with C++ for verification of the Python code.

TODO(GH-175): Add ping tests for spatial gradients and hyperparameter gradients/hessian.

TODO(GH-176): Make the test structure general enough to support other covariance functions automatically.

class moe.tests.optimal_learning.python.python_version.covariance_test.TestSquareExponential[source]

Bases: moe.tests.optimal_learning.python.optimal_learning_test_case.OptimalLearningTestCase

Tests for the computation of the SquareExponential covariance and spatial gradient of covariance.

Test cases are against manually verified results in various spatial dimensions and some ping tests.
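
For reference, the squared exponential covariance under test has the standard form below (restated here for context; \alpha denotes the signal variance and \ell_i the per-dimension length scales, the usual names for these hyperparameters rather than the identifiers used in the code):

    k(x, x') = \alpha \exp\left( -\frac{1}{2} \sum_{i=1}^{d} \frac{(x_i - x'_i)^2}{\ell_i^2} \right)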

classmethod base_setup()[source]

Set up parameters for test cases.

test_hyperparameter_gradient_pings()[source]

Ping test (compare analytic result to finite difference) the gradient wrt hyperparameters.
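
As a rough illustration of what such a ping test does (a minimal sketch, not this module's actual code; it assumes a covariance object exposing a hyperparameters property, a covariance(x, y) method, and a separately computed analytic_grad):

    import numpy

    def ping_hyperparameter_gradient(cov, x, y, analytic_grad, h=1.0e-6, tol=1.0e-5):
        """Compare an analytic hyperparameter gradient to centered finite differences."""
        base = numpy.copy(cov.hyperparameters)
        for i in range(base.size):
            shift = numpy.zeros_like(base)
            shift[i] = h
            cov.hyperparameters = base + shift
            plus = cov.covariance(x, y)
            cov.hyperparameters = base - shift
            minus = cov.covariance(x, y)
            cov.hyperparameters = base  # restore the original hyperparameters
            finite_diff = (plus - minus) / (2.0 * h)
            assert numpy.abs(finite_diff - analytic_grad[i]) < tol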

test_square_exponential_covariance_one_dim()[source]

Test the SquareExponential covariance function against correct values for different sets of hyperparameters in 1D.

test_square_exponential_covariance_three_dim()[source]

Test the SquareExponential covariance function against correct values for different sets of hyperparameters in 3D.

test_square_exponential_grad_covariance_three_dim()[source]

Test the SquareExponential grad_covariance function against correct values for different sets of hyperparameters in 3D.

moe.tests.optimal_learning.python.python_version.expected_improvement_test module

Test the Python implementation of Expected Improvement and its gradient.

class moe.tests.optimal_learning.python.python_version.expected_improvement_test.TestExpectedImprovement[source]

Bases: moe.tests.optimal_learning.python.gaussian_process_test_case.GaussianProcessTestCase

Verify that the “naive” and “vectorized” EI implementations in Python return the same result.

The code for the naive implementation of EI is straightforward to read whereas the vectorized version is a lot more opaque. So we verify one against the other.

Fully verifying the Monte Carlo implementation (e.g., conducting convergence tests, comparing against analytic results) is expensive and is already part of the C++ unit test suite.

BFGS_parameters = _BaseLBFGSBParameters(approx_grad=True, max_func_evals=150000, max_metric_correc=10, factr=10.0, pgtol=1e-10, epsilon=1e-08)
approx_grad = True
classmethod base_setup()[source]

Run the standard setup but seed the RNG first (for repeatability).

It is easy to stumble into test cases where EI is very small (e.g., < 1.e-20), which makes it difficult to set meaningful tolerances for the checks.

dim = 3
epsilon = 1e-08
factr = 10.0
gp_test_environment_input = <moe.tests.optimal_learning.python.gaussian_process_test_case.GaussianProcessTestEnvironmentInput object at 0x11d63dc50>
max_func_evals = 150000
max_metric_correc = 10
noise_variance_base = 0.002
num_hyperparameters = 4
num_mc_iterations = 747
num_sampled_list = (1, 2, 5, 10, 16, 20, 50)
pgtol = 1e-10
precompute_gaussian_process_data = True
rng_seed = 314
test_1d_analytic_ei_edge_cases()[source]

Test cases where analytic EI would attempt to compute 0/0 without variance lower bounds.
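
For context, 1D analytic EI (written here for minimization, in the standard textbook form) is

    EI(x) = (f_best - \mu(x)) \Phi(z) + \sigma(x) \varphi(z),  where  z = (f_best - \mu(x)) / \sigma(x),

with \Phi and \varphi the standard normal CDF and PDF. As \sigma(x) \to 0, z becomes 0/0, which is why a variance lower bound is needed in these edge cases.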

test_evaluate_ei_at_points()[source]

Check that evaluate_expected_improvement_at_point_list computes and orders results correctly (using 1D analytic EI).

test_expected_improvement_and_gradient()[source]

Test EI by comparing the vectorized and “naive” versions.

With the same RNG state, these two functions should return identical output. We use a fairly low number of Monte Carlo iterations since we are not trying to converge, only checking for consistency.

Note

This is not a particularly good test. It relies on the “naive” version being easier to verify manually and only checks for consistency between the naive and vectorized versions.
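
A minimal sketch of this kind of consistency check (generic code with hypothetical helper names, not MOE's API): draw one fixed set of standard normal samples, feed it to both a loop-based and a vectorized Monte Carlo q-EI estimator, and require the results to agree.

    import numpy

    def monte_carlo_ei_naive(mu, chol_cov, best_so_far, normals):
        """Loop-based MC estimate of q-EI: average improvement over each sample."""
        total = 0.0
        for z in normals:
            sample = mu + numpy.dot(chol_cov, z)
            total += max(best_so_far - numpy.min(sample), 0.0)
        return total / len(normals)

    def monte_carlo_ei_vectorized(mu, chol_cov, best_so_far, normals):
        """Vectorized MC estimate of q-EI over all samples at once."""
        samples = mu + numpy.dot(normals, chol_cov.T)
        improvement = numpy.clip(best_so_far - samples.min(axis=1), 0.0, None)
        return improvement.mean()

    # With the same draws, the two estimates should agree to floating point error.
    rng = numpy.random.RandomState(314)
    mu = numpy.array([0.1, -0.2])
    chol_cov = numpy.linalg.cholesky(numpy.array([[1.0, 0.3], [0.3, 1.0]]))
    normals = rng.normal(size=(747, 2))
    naive = monte_carlo_ei_naive(mu, chol_cov, best_so_far=0.0, normals=normals)
    vectorized = monte_carlo_ei_vectorized(mu, chol_cov, 0.0, normals)
    assert numpy.isclose(naive, vectorized)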

test_multistart_analytic_expected_improvement_optimization()[source]

Check that multistart optimization (gradient descent) can find the optimum point to sample (using 1D analytic EI).

test_multistart_monte_carlo_expected_improvement_optimization()[source]

Check that multistart optimization (gradient descent) can find the optimum point to sample (using 2-EI).

test_multistart_qei_expected_improvement_dfo()[source]

Check that multistart optimization (BFGS) can find the optimum point to sample (using 2-EI).

test_qd_and_1d_return_same_analytic_ei()[source]

Compare the 1D analytic EI results to the qD analytic EI results, checking several random points per test case.

test_qd_ei_with_self()[source]

Compare the 1D analytic EI results to the qD analytic EI results, checking several random points per test case.

This test case (unfortunately) suffers from a lot of random variation in the qEI parameters. The tolerance is high because changing the number of iterations or the maximum relative error allowed in the mvndst function leads to different answers.

These precomputed answers were calculated with:

  • maxpts = 200,000 * q
  • releps = 1.0e-14
  • abseps = 0

These values are a tradeoff between accuracy / speed.

moe.tests.optimal_learning.python.python_version.log_likelihood_test module

Test cases for the Log Marginal Likelihood metric for model fit.

Testing is sparse at the moment. The C++ implementations are tested thoroughly (gpp_covariance_test.hpp/cpp) and we rely more on moe.tests.optimal_learning.python.cpp_wrappers.covariance_test's comparison with C++ for verification of the Python code.

class moe.tests.optimal_learning.python.python_version.log_likelihood_test.TestGaussianProcessLogMarginalLikelihood[source]

Bases: moe.tests.optimal_learning.python.gaussian_process_test_case.GaussianProcessTestCase

Test cases for the Log Marginal Likelihood metric for model fit.

Tests check that the gradients ping properly and that computed log likelihood values are < 0.0.
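
For reference, the quantity under test is the standard GP log marginal likelihood (restated here, with K the covariance matrix of the n sampled points, including noise variance, and y the sampled values):

    \log p(y | X, \theta) = -\frac{1}{2} y^T K^{-1} y - \frac{1}{2} \log |K| - \frac{n}{2} \log(2\pi)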

dim = 3
gp_test_environment_input = <moe.tests.optimal_learning.python.gaussian_process_test_case.GaussianProcessTestEnvironmentInput object at 0x11d9b3990>
noise_variance_base = 0.002
num_hyperparameters = 4
num_sampled_list = (1, 2, 5, 10, 16, 20, 42)
precompute_gaussian_process_data = False
test_evaluate_log_likelihood_at_points()[source]

Check that evaluate_log_likelihood_at_hyperparameter_list computes and orders results correctly.

test_grad_log_likelihood_pings()[source]

Ping test (compare analytic result to finite difference) the log likelihood gradient wrt hyperparameters.

test_multistart_hyperparameter_optimization()[source]

Check that multistart optimization (gradient descent) can find the optimum hyperparameters.

moe.tests.optimal_learning.python.python_version.optimization_test module

Tests for the Python optimization module (null, gradient descent, and multistarting) using a simple polynomial objective.

class moe.tests.optimal_learning.python.python_version.optimization_test.QuadraticFunction(maxima_point, current_point)[source]

Bases: moe.optimal_learning.python.interfaces.optimization_interface.OptimizableInterface

Class to evaluate the function f(x_1,...,x_{dim}) = -sum_i (x_i - s_i)^2, i = 1..dim.

This is a simple quadratic form with maxima at (s_1, ..., s_{dim}).
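
For reference, the closed-form derivatives of this objective are

    \frac{\partial f}{\partial x_i} = -2 (x_i - s_i),   \qquad   \frac{\partial^2 f}{\partial x_i \partial x_j} = -2 \delta_{ij},

so the Hessian is simply -2 times the identity matrix.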

compute_grad_objective_function(**kwargs)[source]

Compute the gradient of f(current_point) wrt current_point.

Returns: gradient of the objective; the i-th entry is \frac{\partial f(x)}{\partial x_i}
Return type: array of float64 with shape (problem_size)
compute_hessian_objective_function(**kwargs)[source]

Compute the Hessian matrix of f(current_point) wrt current_point.

This matrix is symmetric as long as the mixed second derivatives of f(x) are continuous (Clairaut’s Theorem): http://en.wikipedia.org/wiki/Symmetry_of_second_derivatives

Returns: Hessian of the objective; the (i, j)-th entry is \frac{\partial^2 f(x)}{\partial x_i \partial x_j}
Return type: array of float64 with shape (problem_size, problem_size)
compute_objective_function(**kwargs)[source]

Compute f(current_point).

Returns: value of the objective function evaluated at current_point
Return type: float64
current_point

Get the current_point (array of float64 with shape (problem_size)) at which this object is evaluating the objective function, f(x).

dim[source]

Return the number of spatial dimensions.

get_current_point()[source]

Get the current_point (array of float64 with shape (problem_size)) at which this object is evaluating the objective function, f(x).

optimum_point[source]

Return argmax_x f(x), the point at which the global maximum occurs.

optimum_value[source]

Return max_x f(x), the global maximum value of this function.

problem_size[source]

Return the number of independent parameters to optimize.

set_current_point(current_point)[source]

Set current_point to the specified point; ordering must match.

Parameters: current_point (array of float64 with shape (problem_size)) – the current_point at which to evaluate the objective function, f(x)
class moe.tests.optimal_learning.python.python_version.optimization_test.TestNullOptimizer[source]

Bases: moe.tests.optimal_learning.python.optimal_learning_test_case.OptimalLearningTestCase

Test the NullOptimizer on a simple objective.

NullOptimizer should do nothing. Multistarting it should be the same as a ‘dumb’ search over points.
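
A minimal sketch of the equivalence claimed above (generic code, not MOE's optimizer API; the helper names and objective are illustrative only): multistarting an optimizer that never moves its point reduces to evaluating the objective at every start point and keeping the argmax.

    import numpy

    def null_optimize(objective, start_point):
        """A 'null' optimizer: return the start point unchanged."""
        return start_point

    def multistart(optimize, objective, start_points):
        """Run ``optimize`` from every start point and keep the best end point."""
        end_points = [optimize(objective, point) for point in start_points]
        values = [objective(point) for point in end_points]
        return end_points[int(numpy.argmax(values))]

    # With the null optimizer, this is exactly a 'dumb' search over start_points.
    objective = lambda x: -numpy.sum((x - 0.5) ** 2)
    start_points = numpy.random.uniform(-1.0, 1.0, size=(50, 3))
    best = multistart(null_optimize, objective, start_points)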

classmethod base_setup()[source]

Set up a test case for optimizing a simple quadratic polynomial.

test_multistarted_null_optimizer()[source]

Test that multistarting the null optimizer just evaluates the function at the start points and identifies the max.

test_null_optimizer()[source]

Test that the null optimizer does not change current_point.

class moe.tests.optimal_learning.python.python_version.optimization_test.TestOptimizer[source]

Bases: moe.tests.optimal_learning.python.optimal_learning_test_case.OptimalLearningTestCase

Test the implemented optimizers on a simple quadratic objective.

We check GD in both an unconstrained and a constrained setting, and we test multistarting it. For the other optimizers, we check them in a constrained setting and in a multistarted setting.

We don’t test the stochastic averaging option meaningfully. We check that the optimizer averages over the number of steps specified by its input, and that the simple unconstrained case can still be solved with averaging on*.

* This is not much of a test: the problem is convex and isotropic, so GD takes a more or less straight path to the maximum, and averaging can only reduce the accuracy of the solve.

TODO(GH-179): Build a simple stochastic objective and test the stochastic component fully.

classmethod base_setup()[source]

Set up a test case for optimizing a simple quadratic polynomial.

multistarted_optimizer_test(optimizer)[source]

Check that the multistarted optimizer can find the optimum in a ‘very’ large domain.

optimizer_test(optimizer, tolerance=2e-13)[source]

Check that the optimizer can find the optimum of the quadratic test objective.

test_bfgs_multistarted_optimizer()[source]

Test if BFGS can optimize a “hard” objective function with multistarts.

test_bfgs_optimizer()[source]

Test if BFGS can optimize a simple objective function.

test_cobyla_multistarted_optimizer()[source]

Test if COBYLA can optimize a “hard” objective function with multistarts.

test_cobyla_optimizer()[source]

Test if COBYLA can optimize a simple objective function.

test_get_averaging_range()[source]

Test the method that determines which interval to average over in Polyak-Ruppert averaging.
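
As background (standard Polyak-Ruppert averaging, not necessarily this module's exact bookkeeping): instead of reporting the final gradient descent iterate, one reports the average of the iterates over a trailing window [T_0, T],

    \bar{x} = \frac{1}{T - T_0 + 1} \sum_{t = T_0}^{T} x_t,

and the method under test determines which interval [T_0, T] to use.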

test_gradient_descent_multistarted_optimizer()[source]

Test if Gradient Descent can optimize a “hard” objective function with multistarts.

test_gradient_descent_optimizer()[source]

Test if Gradient Descent can optimize a simple objective function.

test_gradient_descent_optimizer_constrained()[source]

Check that gradient descent can find the global optimum (in a domain) when the true optimum is outside.

test_gradient_descent_optimizer_with_averaging()[source]

Test if Gradient Descent can optimize a simple objective function.

This test doesn’t exercise the purpose of averaging (i.e., this objective isn’t stochastic), but it does check that the averaging code path at least runs.

test_multistarted_gradient_descent_optimizer_crippled_start()[source]

Check that multistarted GD finds the best result among the individual GD runs.

Module contents

Test suite for the Python implementation of optimal_learning.

  • Lower-level functions (e.g., covariance) are generally tested with a combination of manual verification and derivative pinging.
  • Mid-level functions (e.g., log likelihood) are mostly tested with derivative pinging.
  • High-level functions (e.g., optimization of EI or log likelihood) are only loosely tested, checking only that outputs are valid (rather than trying to verify them).

Note

The Python implementation is additionally tested against the C++ implementation (same inputs, same results for the various optimal_learning features); see moe/tests/optimal_learning/python/cpp_wrappers.

TODO(GH-178): In general, the Python test suite is lacking, and we rely on comparison against the more extensively tested C++ implementation to check the Python code.