moe.tests.optimal_learning.python.cpp_wrappers package

Submodules

moe.tests.optimal_learning.python.cpp_wrappers.exception_test module

Tests for the C++-defined Python exception type objects.

class moe.tests.optimal_learning.python.cpp_wrappers.exception_test.TestExceptionStructure[source]

Bases: object

Tests for the C++-defined Python exception type objects.

test_exception_class_hierarchy()[source]

Test that the C++-defined Python exception type objects have the right class hierarchy.

test_exception_thrown_from_cpp()[source]

Test that a C++ interface function throws the expected type.
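The two checks above can be sketched generically. The class and function names below are stand-ins (not MOE's actual exception types), illustrating the pattern of verifying that exported exceptions share a common base and that a wrapped call raises the specific type.

```python
# Sketch with stand-in names: verifying an exported exception hierarchy
# and the type raised by a wrapped call. These are NOT MOE's real types.

class OptimalLearningException(Exception):
    """Stand-in for a common base exception exported from C++."""

class SingularMatrixException(OptimalLearningException):
    """Stand-in for a specific C++-defined error type."""

def wrapped_cpp_call():
    """Stand-in for a C++ interface function that signals failure."""
    raise SingularMatrixException("matrix is singular")

def test_exception_class_hierarchy():
    # Every specific exception should derive from the common base.
    assert issubclass(SingularMatrixException, OptimalLearningException)
    assert issubclass(OptimalLearningException, Exception)

def test_exception_thrown():
    # The wrapped call raises the specific type, catchable as the base.
    try:
        wrapped_cpp_call()
    except OptimalLearningException as error:
        assert isinstance(error, SingularMatrixException)
    else:
        raise AssertionError("expected an exception")

test_exception_class_hierarchy()
test_exception_thrown()
```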

moe.tests.optimal_learning.python.cpp_wrappers.expected_improvement_test module

Test the C++ implementation of expected improvement against the Python implementation.

class moe.tests.optimal_learning.python.cpp_wrappers.expected_improvement_test.TestExpectedImprovement[source]

Bases: moe.tests.optimal_learning.python.gaussian_process_test_case.GaussianProcessTestCase

Test C++ vs Python implementations of Expected Improvement.

Currently only checks that the 1D, analytic EI & gradient match. Checking Monte Carlo would be very expensive (because the MC estimates must converge) or very difficult (making Python and C++ consume exactly the same sequence of random numbers).

classmethod base_setup()[source]

Run the standard setup but seed the RNG first (for repeatability).

It is easy to stumble into test cases where EI is very small (e.g., < 1.e-20), which makes it difficult to set meaningful tolerances for the checks.

dim = 3
gp_test_environment_input = <moe.tests.optimal_learning.python.gaussian_process_test_case.GaussianProcessTestEnvironmentInput object at 0x11d6199d0>
noise_variance_base = 0.0002
num_hyperparameters = 4
num_sampled_list = (1, 2, 5, 10, 16, 20, 42, 50)
precompute_gaussian_process_data = True
test_python_and_cpp_return_same_1d_analytic_ei_and_gradient()[source]

Compare the 1D analytic EI/grad EI results from Python & C++, checking several random points per test case.
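The comparison pattern can be illustrated without MOE's API: below, the closed-form 1D EI (for minimization) is computed through two independent code paths for the normal CDF, then checked at seeded random points with both relative and absolute tolerances, since EI can be tiny (as base_setup's docstring warns).

```python
import math

import numpy as np

def analytic_ei_erf(mu, sigma, best_so_far):
    """1D analytic EI for minimization; normal CDF via math.erf."""
    z = (best_so_far - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best_so_far - mu) * cdf + sigma * pdf

def analytic_ei_erfc(mu, sigma, best_so_far):
    """Same quantity via math.erfc -- a second, independent path."""
    z = (best_so_far - mu) / sigma
    cdf = 0.5 * math.erfc(-z / math.sqrt(2.0))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best_so_far - mu) * cdf + sigma * pdf

# Seed first (as base_setup does), then compare at several random points.
rng = np.random.default_rng(314)
for _ in range(100):
    mu = rng.normal()
    sigma = rng.uniform(0.3, 2.0)
    best_so_far = rng.normal()
    np.testing.assert_allclose(
        analytic_ei_erf(mu, sigma, best_so_far),
        analytic_ei_erfc(mu, sigma, best_so_far),
        rtol=1.0e-10, atol=1.0e-13,
    )
```

The absolute tolerance matters: when mu is far above best_so_far, EI underflows toward zero and a purely relative check becomes meaningless.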

moe.tests.optimal_learning.python.cpp_wrappers.gaussian_process_test module

Test the C++ implementation of Gaussian Process properties (mean, var, gradients thereof) against the Python version.

class moe.tests.optimal_learning.python.cpp_wrappers.gaussian_process_test.TestGaussianProcess[source]

Bases: moe.tests.optimal_learning.python.gaussian_process_test_case.GaussianProcessTestCase

Test C++ vs Python implementations of Gaussian Process properties (mean, variance, cholesky variance, and their gradients).

classmethod base_setup()[source]

Run the standard setup but seed the RNG first (for repeatability).

It is easy to stumble into test cases where mean, var terms are very small (e.g., < 1.e-20), which makes it difficult to set meaningful tolerances for the checks.

precompute_gaussian_process_data = True
test_gp_add_sampled_points_singular_covariance_matrix()[source]

Test that GaussianProcess.add_sampled_points indicates a singular covariance matrix when points_sampled contains duplicates (0 noise).

test_gp_construction_singular_covariance_matrix()[source]

Test that the GaussianProcess ctor indicates a singular covariance matrix when points_sampled contains duplicates (0 noise).
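The failure mode these two tests target is easy to reproduce outside MOE: with zero noise, duplicate rows in points_sampled produce identical rows in the covariance matrix, which is therefore singular and cannot be Cholesky-factored. A minimal sketch with a generic squared-exponential kernel (an assumption; MOE's covariance classes are not used here):

```python
import numpy as np

def squared_exponential_kernel(points, length_scale=1.0):
    """Squared-exponential covariance matrix with zero noise."""
    diffs = points[:, None, :] - points[None, :, :]
    sq_dist = np.sum(diffs ** 2, axis=-1)
    return np.exp(-0.5 * sq_dist / length_scale ** 2)

# The last sampled point duplicates the first, so two rows (and columns)
# of the covariance matrix are identical: the matrix is singular.
points_sampled = np.array([[0.0, 1.0],
                           [0.5, -0.2],
                           [0.0, 1.0]])
covariance = squared_exponential_kernel(points_sampled)

singular = False
try:
    np.linalg.cholesky(covariance)  # fails: matrix is not positive definite
except np.linalg.LinAlgError:
    singular = True
assert singular
```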

test_python_and_cpp_return_same_cholesky_variance_and_gradient()[source]

Compare chol_var/grad chol_var results from Python & C++, checking several random points per test case.

test_python_and_cpp_return_same_mu_and_gradient()[source]

Compare mu/grad mu results from Python & C++, checking several random points per test case.

test_python_and_cpp_return_same_variance_and_gradient()[source]

Compare var/grad var results from Python & C++, checking several random points per test case.

test_sample_point_from_gp()[source]

Test that sampling points from the GP works.
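Sampling from a GP posterior reduces to mean plus a Cholesky factor applied to standard normals. This sketch (plain numpy, not MOE's GaussianProcess) draws many samples and sanity-checks their empirical mean and covariance, the kind of "sampling works" check the test above performs:

```python
import numpy as np

rng = np.random.default_rng(8675309)  # seed for repeatability
mean = np.array([0.5, -1.0])
covariance = np.array([[1.0, 0.3],
                       [0.3, 0.5]])
chol = np.linalg.cholesky(covariance)

# Draw samples x = mean + L z with z ~ N(0, I); then cov(x) = L L^T.
samples = mean + rng.standard_normal((200000, 2)) @ chol.T

assert np.allclose(samples.mean(axis=0), mean, atol=0.02)
assert np.allclose(np.cov(samples.T), covariance, atol=0.02)
```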

moe.tests.optimal_learning.python.cpp_wrappers.log_likelihood_test module

Test cases to check that C++ and Python implementations of moe.optimal_learning.python.interfaces.log_likelihood_interface match.

class moe.tests.optimal_learning.python.cpp_wrappers.log_likelihood_test.TestLogLikelihood[source]

Bases: moe.tests.optimal_learning.python.gaussian_process_test_case.GaussianProcessTestCase

Test that the C++ and Python implementations of the Log Marginal Likelihood match (value and gradient).

dim = 3
gp_test_environment_input = <moe.tests.optimal_learning.python.gaussian_process_test_case.GaussianProcessTestEnvironmentInput object at 0x11d4c4910>
noise_variance_base = 0.0002
num_hyperparameters = 4
num_sampled_list = (1, 2, 5, 10, 16, 20, 42)
precompute_gaussian_process_data = False
test_python_and_cpp_return_same_log_likelihood_and_gradient()[source]

Check that the C++ and Python log likelihood + gradients match over a series of randomly built data sets.
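The log marginal likelihood being compared is log p(y) = -y^T K^{-1} y / 2 - log|K| / 2 - (n/2) log 2π. As an illustration of the cross-implementation check (with generic numpy code, not MOE's classes), two independent evaluations of it, one naive and one Cholesky-based, can be compared over a series of randomly built data sets:

```python
import numpy as np

def log_marginal_likelihood_direct(K, y):
    """Naive evaluation: explicit inverse and determinant."""
    n = y.shape[0]
    return (-0.5 * y @ np.linalg.inv(K) @ y
            - 0.5 * np.log(np.linalg.det(K))
            - 0.5 * n * np.log(2.0 * np.pi))

def log_marginal_likelihood_cholesky(K, y):
    """Stable evaluation via the Cholesky factor L (K = L L^T)."""
    n = y.shape[0]
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    return (-0.5 * y @ alpha
            - np.sum(np.log(np.diag(L)))  # log|K| / 2
            - 0.5 * n * np.log(2.0 * np.pi))

# Randomly built SPD covariance matrices and observations, seeded first.
rng = np.random.default_rng(0)
for n in (1, 2, 5, 10):
    A = rng.standard_normal((n, n))
    K = A @ A.T + n * np.eye(n)  # well-conditioned SPD matrix
    y = rng.standard_normal(n)
    np.testing.assert_allclose(
        log_marginal_likelihood_direct(K, y),
        log_marginal_likelihood_cholesky(K, y),
        rtol=1.0e-9,
    )
```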

moe.tests.optimal_learning.python.cpp_wrappers.optimization_test module

Tests for the cpp_wrappers.optimization module.

Currently these objects either inherit from Boost Python objects or are very thin data containers, so the tests just verify that objects are created and behave as expected. MOE does not yet support optimizing arbitrary Python functions through the C++-coded optimizers.

class moe.tests.optimal_learning.python.cpp_wrappers.optimization_test.TestOptimizerParameters[source]

Bases: object

Test that the various optimizer parameter classes (wrapping C++ objects) work.

classmethod base_setup()[source]

Set up dummy parameters for testing optimization parameter structs.

test_gradient_descent_parameters()[source]

Test that GradientDescentParameters is created correctly and comparison works.

test_newton_parameters()[source]

Test that NewtonParameters is created correctly and comparison works.
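Since these parameter classes are thin containers, "created correctly and comparison works" amounts to field-wise construction and equality checks. A sketch with a hypothetical dataclass (the field names are illustrative, not MOE's actual GradientDescentParameters or NewtonParameters signatures):

```python
from dataclasses import dataclass

# Hypothetical stand-in for a thin optimizer-parameter container;
# the fields are illustrative, not MOE's real parameter struct.
@dataclass(frozen=True)
class GradientDescentParametersSketch:
    max_num_steps: int
    gamma: float
    pre_mult: float
    tolerance: float

params = GradientDescentParametersSketch(400, 0.7, 1.0, 1.0e-7)
same = GradientDescentParametersSketch(400, 0.7, 1.0, 1.0e-7)
different = GradientDescentParametersSketch(200, 0.7, 1.0, 1.0e-7)

assert params == same        # field-wise equality
assert params != different   # any differing field breaks equality
```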

Module contents

Tests checking that results (e.g., log likelihood, expected improvement, gradients thereof) computed by Python and C++ are the same.

These tests are just meant as an extra check for equivalence since we have two “identical” implementations of optimal_learning. The C++ is independently tested (see moe/optimal_learning/cpp/*test.cpp files) as is the Python* (see moe/tests/optimal_learning/python/python_version), so the tests in this package generally are not very exhaustive.

* The C++ tests are much more extensive than the Python tests, which still need substantial development.