moe.optimal_learning.python package

Subpackages

Submodules

moe.optimal_learning.python.comparison module

Comparison mixins to help devs generate comparison operations for their classes.

Consider combining with tools like functools.total_ordering: https://docs.python.org/2/library/functools.html#functools.total_ordering to fill out additional comparison functionality.

class moe.optimal_learning.python.comparison.EqualityComparisonMixin[source]

Bases: object

Mixin class to autogenerate __eq__ (from instance members), __ne__, and __repr__ and disable __hash__.

Adds no names to the class’s public namespace (i.e., no names without leading or trailing underscores).

Sources:
  • http://stackoverflow.com/questions/390250/elegant-ways-to-support-equivalence-equality-in-python-classes
  • http://stackoverflow.com/questions/9058305/getting-attributes-of-a-class

Be careful with NaN! This object uses dict comparison (which amounts to sorted tuple comparison). Sorted tuple comparison in Python does NOT call __eq__ on every test pair.

In particular, Python short-circuits comparison when object ids are the same. Identity is equality. However, NaN makes it impossible to be consistent here, since NaN != NaN by definition.

So:

>>> import copy
>>> a = {'k': float('nan')}  # Note: float('nan') is not interned (no floats are)
>>> float('nan') == float('nan')  # False: by definition in IEEE 754
>>> a == a  # True: short-circuits because both floats have the same object id
>>> a == copy.deepcopy(a)  # True: WHOA!

WHOA: You might think deepcopy would produce different object ids, but it doesn’t. Instead of interning, floats have a short cache of already-created objects (the “free-list”), so a new float is NOT created. This is generally not crippling because:

>>> b = copy.deepcopy(a)
>>> b['k'] = float('nan')  # Force a new float object
>>> a == b  # False

Note: numpy.nan is a Python float and has the same issue. numpy.float64(numpy.nan), however, is a numpy.float64, which does not appear to undergo any kind of caching/interning.
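
A minimal usage sketch, assuming the mixin compares instance members as described above; the Config class and its fields are hypothetical:

from moe.optimal_learning.python.comparison import EqualityComparisonMixin

class Config(EqualityComparisonMixin):

    """Hypothetical class whose instances should compare by member values."""

    def __init__(self, alpha, beta):
        self.alpha = alpha
        self.beta = beta

a = Config(1.0, 2.0)
b = Config(1.0, 2.0)
assert a == b          # autogenerated __eq__ compares instance members
assert not (a != b)    # autogenerated __ne__ is the negation of __eq__
# __hash__ is disabled, so instances are not usable as dict keys or set members.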

moe.optimal_learning.python.constant module

Some default configuration parameters for optimal_learning components.

moe.optimal_learning.python.constant.CONSTANT_LIAR_METHODS = ['constant_liar_min', 'constant_liar_max', 'constant_liar_mean']

Pre-defined constant liar “lie” methods supported by moe

moe.optimal_learning.python.constant.COVARIANCE_TYPES = ['square_exponential']

Covariance types supported by moe

moe.optimal_learning.python.constant.DEFAULT_MAX_NUM_THREADS = 4

Default number of threads to use in computation

moe.optimal_learning.python.constant.DOMAIN_TYPES = ['tensor_product', 'simplex_intersect_tensor_product']

Domain types supported by moe

class moe.optimal_learning.python.constant.DefaultOptimizerInfoTuple[source]

Bases: moe.optimal_learning.python.constant._BaseDefaultOptimizerInfoTuple

Container holding default values to use with a moe.views.schemas.OptimizerInfo.

Variables:
  • num_multistarts – number of locations from which to multistart the optimizer
  • num_random_samples – number of random search points to draw
  • optimizer_parameters – parameter struct for the corresponding optimizer type (e.g., _BaseGradientDescentParameters, _BaseNewtonParameters, or NullParameters)

moe.optimal_learning.python.constant.ENDPOINT_TO_DEFAULT_OPTIMIZER_TYPE = {
    'gp_next_points_kriging': 'gradient_descent_optimizer',
    ('gp_hyper_opt', 'leave_one_out_log_likelihood'): 'gradient_descent_optimizer',
    ('gp_next_points_epi', 'multi_point_ei'): 'l_bfgs_b_optimizer',
    'gp_next_points_constant_liar': 'gradient_descent_optimizer',
    ('gp_hyper_opt', 'log_marginal_likelihood'): 'newton_optimizer',
    ('gp_next_points_epi', 'single_point_ei'): 'gradient_descent_optimizer',
}

dict mapping from tuples describing endpoints and objective functions to optimizer type strings; i.e., one of moe.optimal_learning.python.constant.OPTIMIZER_TYPES.

class moe.optimal_learning.python.constant.GaussianProcessParameters

Bases: tuple

GaussianProcessParameters(length_scale, signal_variance)

length_scale

Alias for field number 0

signal_variance

Alias for field number 1
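
A tiny construction sketch of this namedtuple; the numeric values are illustrative only, not the package defaults:

from moe.optimal_learning.python.constant import GaussianProcessParameters

params = GaussianProcessParameters(length_scale=[0.5, 0.5], signal_variance=1.0)
print(params.length_scale)     # field 0: [0.5, 0.5]
print(params.signal_variance)  # field 1: 1.0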

moe.optimal_learning.python.constant.LIKELIHOOD_TYPES = ['leave_one_out_log_likelihood', 'log_marginal_likelihood']

Log Likelihood types supported by moe

moe.optimal_learning.python.constant.MAX_ALLOWED_NUM_THREADS = 10000

Maximum number of threads that a user can specify. TODO(GH-301): make this a server-configurable value or set the appropriate OpenMP environment variable

moe.optimal_learning.python.constant.OPTIMIZER_TYPES = ['null_optimizer', 'newton_optimizer', 'gradient_descent_optimizer', 'l_bfgs_b_optimizer']

Optimizer types supported by moe

moe.optimal_learning.python.constant.OPTIMIZER_TYPE_AND_OBJECTIVE_TO_DEFAULT_PARAMETERS = {
    ('gradient_descent_optimizer', 'gp_next_points_epi', 'ei_analytic'): _BaseDefaultOptimizerInfoTuple(num_multistarts=600, num_random_samples=50000, optimizer_parameters=_BaseGradientDescentParameters(max_num_steps=500, max_num_restarts=4, num_steps_averaged=0, gamma=0.6, pre_mult=1.0, max_relative_change=1.0, tolerance=1e-07)),
    ('null_optimizer', 'gp_next_points_epi', 'ei_monte_carlo'): _BaseDefaultOptimizerInfoTuple(num_multistarts=1, num_random_samples=50000, optimizer_parameters=NullParameters()),
    ('null_optimizer', 'gp_hyper_opt', 'log_marginal_likelihood'): _BaseDefaultOptimizerInfoTuple(num_multistarts=1, num_random_samples=300000, optimizer_parameters=NullParameters()),
    ('null_optimizer', 'gp_next_points_epi', 'ei_analytic'): _BaseDefaultOptimizerInfoTuple(num_multistarts=1, num_random_samples=500000, optimizer_parameters=NullParameters()),
    ('null_optimizer', 'gp_next_points_kriging'): _BaseDefaultOptimizerInfoTuple(num_multistarts=1, num_random_samples=500000, optimizer_parameters=NullParameters()),
    ('null_optimizer', 'gp_next_points_constant_liar'): _BaseDefaultOptimizerInfoTuple(num_multistarts=1, num_random_samples=500000, optimizer_parameters=NullParameters()),
    ('null_optimizer', 'gp_hyper_opt', 'leave_one_out_log_likelihood'): _BaseDefaultOptimizerInfoTuple(num_multistarts=1, num_random_samples=300000, optimizer_parameters=NullParameters()),
    ('newton_optimizer', 'gp_hyper_opt', 'log_marginal_likelihood'): _BaseDefaultOptimizerInfoTuple(num_multistarts=200, num_random_samples=0, optimizer_parameters=_BaseNewtonParameters(max_num_steps=150, gamma=1.2, time_factor=0.0005, max_relative_change=1.0, tolerance=1e-09)),
    ('gradient_descent_optimizer', 'gp_hyper_opt', 'leave_one_out_log_likelihood'): _BaseDefaultOptimizerInfoTuple(num_multistarts=400, num_random_samples=0, optimizer_parameters=_BaseGradientDescentParameters(max_num_steps=600, max_num_restarts=10, num_steps_averaged=0, gamma=0.9, pre_mult=0.25, max_relative_change=0.2, tolerance=1e-05)),
    ('gradient_descent_optimizer', 'gp_next_points_kriging'): _BaseDefaultOptimizerInfoTuple(num_multistarts=600, num_random_samples=50000, optimizer_parameters=_BaseGradientDescentParameters(max_num_steps=500, max_num_restarts=4, num_steps_averaged=0, gamma=0.6, pre_mult=1.0, max_relative_change=1.0, tolerance=1e-07)),
    ('l_bfgs_b_optimizer', 'gp_next_points_epi', 'ei_analytic'): _BaseDefaultOptimizerInfoTuple(num_multistarts=200, num_random_samples=4000, optimizer_parameters=_BaseLBFGSBParameters(approx_grad=True, max_func_evals=15000, max_metric_correc=10, factr=10000000.0, pgtol=1e-05, epsilon=1e-08)),
    ('gradient_descent_optimizer', 'gp_hyper_opt', 'log_marginal_likelihood'): _BaseDefaultOptimizerInfoTuple(num_multistarts=400, num_random_samples=0, optimizer_parameters=_BaseGradientDescentParameters(max_num_steps=600, max_num_restarts=10, num_steps_averaged=0, gamma=0.9, pre_mult=0.25, max_relative_change=0.2, tolerance=1e-05)),
    ('gradient_descent_optimizer', 'gp_next_points_constant_liar'): _BaseDefaultOptimizerInfoTuple(num_multistarts=600, num_random_samples=50000, optimizer_parameters=_BaseGradientDescentParameters(max_num_steps=500, max_num_restarts=4, num_steps_averaged=0, gamma=0.6, pre_mult=1.0, max_relative_change=1.0, tolerance=1e-07)),
    ('gradient_descent_optimizer', 'gp_next_points_epi', 'ei_monte_carlo'): _BaseDefaultOptimizerInfoTuple(num_multistarts=200, num_random_samples=4000, optimizer_parameters=_BaseGradientDescentParameters(max_num_steps=500, max_num_restarts=4, num_steps_averaged=100, gamma=0.6, pre_mult=1.0, max_relative_change=1.0, tolerance=1e-05)),
}

dict mapping from tuples of optimizer type, endpoint, etc. to default optimizer parameters. The default parameter structs are of type moe.optimal_learning.python.constant.DefaultOptimizerInfoTuple and the actual default parameters are defined in moe.optimal_learning.python.constant. Note: (NEWTON_OPTIMIZER, views_constant.GP_HYPER_OPT_ROUTE_NAME, LEAVE_ONE_OUT_LOG_LIKELIHOOD) does not have an entry because this combination is not yet implemented. Newton is also not implemented for any of the GP_NEXT_POINTS_* endpoints.
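
A hedged sketch of looking up defaults from the two tables above; the key strings are taken from the dict contents shown on this page:

from moe.optimal_learning.python import constant

# Which optimizer does the hyperparameter-optimization endpoint default to?
optimizer_type = constant.ENDPOINT_TO_DEFAULT_OPTIMIZER_TYPE[
    ('gp_hyper_opt', 'log_marginal_likelihood')]
# 'newton_optimizer'

# Default multistart count and optimizer parameters for that combination.
defaults = constant.OPTIMIZER_TYPE_AND_OBJECTIVE_TO_DEFAULT_PARAMETERS[
    (optimizer_type, 'gp_hyper_opt', 'log_marginal_likelihood')]
print(defaults.num_multistarts)       # 200
print(defaults.optimizer_parameters)  # _BaseNewtonParameters(...)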

moe.optimal_learning.python.data_containers module

Data containers for conveniently interacting with optimal_learning.python members.

class moe.optimal_learning.python.data_containers.HistoricalData(dim, sample_points=None, validate=False)[source]

Bases: object

A data container for storing the historical data from an entire experiment in a layout convenient for this library.

Users will likely find it most convenient to store experiment historical data in tuples of (coordinates, value, noise); for example, these could be the columns of a database row, part of an ORM, etc. The moe.optimal_learning.python.SamplePoint class (below) provides a convenient representation of this input format, but users are not required to use it.

But the internals of optimal_learning will generally do computations on all coordinates at once, all values at once, and/or all noise measurements at once. So this object reads the input data and “transposes” the ordering so that we have a matrix of coordinates and vectors of values and noises. Compared to storing a list of moe.optimal_learning.python.SamplePoint, these internals save on redundant data transformations and improve locality.

Note that the points in HistoricalData are not associated to any particular domain. HistoricalData could be (and is) used for model selection as well as Gaussian Process manipulation, Expected Improvement optimization, etc. In the former, the point-domain has no meaning (as opposed to the hyperparameter domain). In the latter, users could perform multiple optimization runs with slightly different domains (e.g., differing levels of exploration) without changing HistoricalData. Users may also optimize within a subdomain of the points already sampled. Thus, we are not including domain in HistoricalData so as to place no restriction on how users can use optimal_learning and think about their experiments.
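
A minimal construction sketch, assuming the SamplePoint layout documented below; the coordinates, values, and noise variances are made up:

from moe.optimal_learning.python.data_containers import HistoricalData, SamplePoint

# Three observations of a 2D objective: (coordinates, measured value, noise variance).
sample_points = [
    SamplePoint([0.0, 0.0], 1.2, 0.01),
    SamplePoint([0.5, 0.5], 0.7, 0.01),
    SamplePoint([1.0, 0.2], 0.9, 0.01),
]
history = HistoricalData(dim=2, sample_points=sample_points, validate=True)
print(history.num_sampled)     # 3
print(history.points_sampled)  # array of float64 with shape (3, 2)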

Variables:
  • _points_sampled – (array of float64 with shape (self.num_sampled, self.dim)) already-sampled points
  • _points_sampled_value – (array of float64 with shape (self.num_sampled)) function value measured at each point
  • _points_sampled_noise_variance – (array of float64 with shape (self.num_sampled)) noise variance associated with points_sampled_value
append_historical_data(points_sampled, points_sampled_value, points_sampled_noise_variance, validate=False)[source]

Append lists of points_sampled, their values, and their noise variances to the data members of this class.

This class (see class docstring) stores its data members as numpy arrays; this method provides a way for users who already have data in this format to append directly instead of creating an intermediate moe.optimal_learning.python.SamplePoint list.

Parameters:
  • points_sampled (array of float64 with shape (num_sampled, dim)) – already-sampled points
  • points_sampled_value (array of float64 with shape (num_sampled)) – function value measured at each point
  • points_sampled_noise_variance (array of float64 with shape (num_sampled)) – noise variance associated with points_sampled_value
  • validate (boolean) – whether to sanity-check the input sample_points
append_sample_points(sample_points, validate=False)[source]

Append the contents of sample_points to the data members of this class.

Parameters:
  • sample_points (iterable of iterables with the same structure as a list of moe.optimal_learning.python.SamplePoint) – the already-sampled points: coordinates, objective function values, and noise variance
  • validate (boolean) – whether to sanity-check the input sample_points
dim[source]

Return the number of spatial dimensions of a point in self.points_sampled.

json_payload()[source]

Construct a JSON-serializable dictionary of the historical data, in the format recognized by the MOE REST API.

num_sampled[source]

Return the number of sampled points.

points_sampled[source]

Return the coordinates of the points_sampled, array of float64 with shape (self.num_sampled, self.dim).

points_sampled_noise_variance[source]

Return the noise variances associated with function values measured at each of self.points_sampled, array of float64 with shape (self.num_sampled).

points_sampled_value[source]

Return the objective function values measured at each of self.points_sampled, array of float64 with shape (self.num_sampled).

to_list_of_sample_points()[source]

Convert this HistoricalData into a list of SamplePoint.

The list of SamplePoint format is more convenient for human consumption/introspection.

Returns:list where i-th SamplePoint has data from the i-th entry of each self.points_sampled* member.
Return type:list of moe.optimal_learning.python.SamplePoint
static validate_historical_data(dim, points_sampled, points_sampled_value, points_sampled_noise_variance)[source]

Check that the historical data components (dim, coordinates, values, noises) are consistent in dimension and all have finite values.

Parameters:
  • dim (int > 0) – number of (expected) spatial dimensions
  • points_sampled (array of float64 with shape (num_sampled, dim)) – already-sampled points
  • points_sampled_value (array of float64 with shape (num_sampled)) – function value measured at each point
  • points_sampled_noise_variance (array of float64 with shape (num_sampled)) – noise variance associated with points_sampled_value
Returns:

True if inputs are valid

Return type:

boolean

static validate_sample_points(dim, sample_points)[source]

Check that sample_points passes basic validity checks: dimension is the same, all values are finite.

Parameters:
  • dim (int > 0) – number of (expected) spatial dimensions
  • sample_points (iterable of iterables with the same structure as a list of moe.optimal_learning.python.SamplePoint) – the already-sampled points: coordinates, objective function values, and noise variance
Returns:

True if inputs are valid

Return type:

boolean

class moe.optimal_learning.python.data_containers.SamplePoint[source]

Bases: moe.optimal_learning.python.data_containers._BaseSamplePoint

A point (coordinates, function value, noise variance) sampled from the objective function we are modeling/optimizing.

This class is a representation of a “Sample Point,” which is defined by the three data members listed here. SamplePoint is a convenient way of communicating data to the rest of the optimal_learning library (via the HistoricalData container); it also provides a convenient grouping for interactive introspection.

Users are not required to use SamplePoint; iterables with the same data layout will suffice.

Variables:
  • point – (iterable of dim float64) The point sampled (in the domain of the function)
  • value – (float64) The value returned by the function
  • noise_variance – (float64 >= 0.0) The noise/measurement variance (if any) associated with value
json_payload()[source]

Convert the sample_point into a dict to be consumed by json for a REST request.

validate(dim=None)[source]

Check this SamplePoint passes basic validity checks: dimension is expected, all values are finite.

Parameters:

dim (int > 0) – number of (expected) spatial dimensions; None to skip check

Raises:
  • ValueError – self.point does not have exactly dim entries
  • ValueError – if any member data is non-finite or out of range

moe.optimal_learning.python.geometry_utils module

Geometry utilities, e.g., ClosedInterval, point-plane geometry, random point generation.

class moe.optimal_learning.python.geometry_utils.ClosedInterval[source]

Bases: moe.optimal_learning.python.geometry_utils.ClosedInterval

Container to represent the mathematical notion of a closed interval, commonly written \([a, b]\).

The closed interval \([a, b]\) is the set of all numbers \(x \in \mathbb{R}\) such that \(a \leq x \leq b\). Note that “closed” here indicates the interval includes both endpoints. An interval with \(a > b\) is considered empty.

Variables:
  • min – (float64) the “left” bound of the domain, a
  • max – (float64) the “right” bound of the domain, b
static build_closed_intervals_from_list(bounds_list)[source]

Construct a list of dim ClosedInterval from an iterable structure of dim iterables with len = 2.

For example, [[1, 2], [3, 4]] becomes [ClosedInterval(min=1, max=2), ClosedInterval(min=3, max=4)].

Parameters:bounds_list (iterable of iterables, where the second dimension has len = 2) – bounds to convert
Returns:bounds_list converted to list of ClosedInterval
Return type:list of ClosedInterval
is_empty()[source]

Check whether this ClosedInterval is the emptyset: max < min.

is_inside(value)[source]

Check if a value is inside this ClosedInterval.

length[source]

Compute the length of this ClosedInterval.
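
A short sketch of the interval helpers, using the constructor shown in build_closed_intervals_from_list() above:

from moe.optimal_learning.python.geometry_utils import ClosedInterval

interval = ClosedInterval(min=0.0, max=2.5)
print(interval.length)          # 2.5
print(interval.is_inside(1.0))  # True
print(ClosedInterval(min=3.0, max=1.0).is_empty())  # True: max < min, so the interval is empty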

moe.optimal_learning.python.geometry_utils.generate_grid_points(points_per_dimension, domain_bounds)[source]

Generate a uniform grid of points on a tensor product region; exponential runtime.

This can be useful for producing a reasonable set of initial samples when bootstrapping optimal_learning. Grid sampling (as opposed to a random sampling, e.g., latin hypercube) is not random. It also guarantees sampling of the domain corners.

Note

This operation is like an outer-product, so 4 points per dimension in 10 dimensions produces \(4^{10}\) points. This could be built as an iterator instead, but the typical use case involves function evaluations at every point, so generating the points is not the limiting factor.

Parameters:
  • points_per_dimension (tuple or scalar) – (n_1, n_2, ... n_{dim}) number of stencil points per spatial dimension. If points_per_dimension is a scalar or has length 1, that single value is used as n_i for every dimension
  • domain_bounds (iterable of dim ClosedInterval) – the boundaries of a dim-dimensional tensor-product domain
Returns:

stencil point coordinates

Return type:

array of float64 with shape (\(\prod_i n_i\), dim)
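
A hedged usage sketch of grid generation; the bounds and point counts are illustrative:

from moe.optimal_learning.python.geometry_utils import ClosedInterval, generate_grid_points

domain_bounds = [ClosedInterval(0.0, 1.0), ClosedInterval(-1.0, 1.0)]
grid = generate_grid_points((3, 5), domain_bounds)
print(grid.shape)  # (15, 2): 3 * 5 stencil points in 2 dimensions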

moe.optimal_learning.python.geometry_utils.generate_latin_hypercube_points(num_points, domain_bounds)[source]

Compute a set of random points inside some domain that lie in a latin hypercube.

In 2D, a latin hypercube is a latin square (a checkerboard) such that there is exactly one sample in each row and each column. This notion generalizes to higher dimensions, where each dimensional ‘slice’ has precisely one sample.

See wikipedia: http://en.wikipedia.org/wiki/Latin_hypercube_sampling for more details on the latin hypercube sampling process.

Parameters:
  • num_points (int > 0) – number of random points to generate
  • domain_bounds (list of dim ClosedInterval) – [min, max] boundaries of the hypercube in each dimension
Returns:

uniformly distributed random points inside the specified hypercube

Return type:

array of float64 with shape (num_points, dim)
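
A matching sketch for latin hypercube sampling over the same kind of bounds:

from moe.optimal_learning.python.geometry_utils import ClosedInterval, generate_latin_hypercube_points

domain_bounds = [ClosedInterval(0.0, 1.0), ClosedInterval(-1.0, 1.0)]
points = generate_latin_hypercube_points(10, domain_bounds)
print(points.shape)  # (10, 2): one row per sample, one column per dimension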

moe.optimal_learning.python.linkers module

Links between the python and cpp_wrapper implementations of domains, covariances and optimizations.

class moe.optimal_learning.python.linkers.CovarianceLinks

Bases: tuple

CovarianceLinks(python_covariance_class, cpp_covariance_class)

cpp_covariance_class

Alias for field number 1

python_covariance_class

Alias for field number 0

class moe.optimal_learning.python.linkers.DomainLinks

Bases: tuple

DomainLinks(python_domain_class, cpp_domain_class)

cpp_domain_class

Alias for field number 1

python_domain_class

Alias for field number 0

class moe.optimal_learning.python.linkers.LogLikelihoodMethod

Bases: tuple

LogLikelihoodMethod(log_likelihood_type, log_likelihood_class)

log_likelihood_class

Alias for field number 1

log_likelihood_type

Alias for field number 0

class moe.optimal_learning.python.linkers.OptimizerMethod

Bases: tuple

OptimizerMethod(optimizer_type, python_parameters_class, cpp_parameters_class, python_optimizer_class, cpp_optimizer_class)

cpp_optimizer_class

Alias for field number 4

cpp_parameters_class

Alias for field number 2

optimizer_type

Alias for field number 0

python_optimizer_class

Alias for field number 3

python_parameters_class

Alias for field number 1

moe.optimal_learning.python.repeated_domain module

RepeatedDomain class for manipulating sets of points in a (kernel) domain simultaneously.

class moe.optimal_learning.python.repeated_domain.RepeatedDomain(num_repeats, domain)[source]

Bases: moe.optimal_learning.python.interfaces.domain_interface.DomainInterface

A generic domain type for simultaneously manipulating num_repeats points in a “regular” domain (the kernel).

Note

Comments in this class are copied from RepeatedDomain in gpp_domain.hpp.

Note

the kernel domain is not copied. Instead, the kernel functions are called num_repeats times in a loop. In some cases, data reordering is also necessary to preserve the output properties (e.g., uniform distribution).

For some use cases (e.g., q,p-EI optimization with q > 1), we need to simultaneously manipulate several points within the same domain. To support this use case, we have the RepeatedDomain, a light-weight wrapper around any DomainInterface subclass that kernelizes that object’s functionality.

In general, kernel domain operations need to be performed num_repeats times, once for each point. This class hides the looping logic so that use cases like various moe.optimal_learning.python.interfaces.optimization_interface.OptimizerInterface subclasses do not need to be explicitly aware of whether they are optimizing 1 point or 50 points. Instead, the OptimizableInterface implementation provides problem_size() and appropriately sized gradient information. Coupled with RepeatedDomain, Optimizers can remain oblivious.

In simpler terms, say we want to solve 5,0-EI in a parameter-space of dimension 3. So we would have 5 points moving around in a 3D space. The 3D space, whatever it is, is the kernel domain. We “repeat” the kernel 5 times; in practice this mostly amounts to simple loops around kernel functions and sometimes data reordering is also needed.

Note

this operation is more complex than just working in a higher dimensional space. 3 points in a 2D simplex is not the same as 1 point in a 6D simplex; e.g., [(0.5, 0.5), (0.5, 0.5), (0.5, 0.5)] is valid in the first scenario but not in the second.

Where the member domain takes kernel_input, this class’s members take an array with shape (num_repeats, ) + kernel_input.shape. Similarly kernel_output becomes an array with shape (num_repeats, ) + kernel_output.shape.

For example, check_point_inside() calls the kernel domain’s check_point_inside() function num_repeats times, returning True only if all num_repeats input points are inside the kernel domain.

check_point_inside(points)[source]

Check if a point is inside the domain/on its boundary or outside.

Parameters:points (array of float64 with shape (num_repeats, dim)) – points to check
Returns:true if all num_repeats points are inside the kernel domain
Return type:bool
compute_update_restricted_to_domain(max_relative_change, current_point, update_vector)[source]

Compute a new update so that CheckPointInside(current_point + return_value) is true.

Returns a new update vector in return_value so that:
point_new = point + return_value

has coordinates such that CheckPointInside(point_new) returns true. We select point_new by projecting point + update_vector to the nearest point on the domain.

return_value is a function of update_vector. return_value is just a copy of update_vector if current_point is already inside the domain.

Note

We modify update_vector (instead of returning point_new) so that further update limiting/testing may be performed.

Parameters:
  • max_relative_change (float64 in (0, 1]) – max change allowed per update (as a relative fraction of current distance to boundary)
  • current_point (array of float64 with shape (num_repeats, dim)) – starting point
  • update_vector (array of float64 with shape (num_repeats, dim)) – proposed update
Returns:

new update so that the final point remains inside the domain

Return type:

array of float64 with shape (num_repeats, dim)

dim[source]

Return the number of spatial dimensions of the kernel domain.

generate_random_point_in_domain(random_source=None)[source]

Generate point uniformly at random such that self.check_point_inside(point) is True.

Note

if you need multiple points, use generate_uniform_random_points_in_domain instead; depending on the implementation, it may yield better distributions over many points. For example, tensor product domains use latin hypercube sampling instead of repeated random draws, which guarantees that no non-uniform clusters arise (in subspaces), whereas this method treats all draws independently.

Returns:point in repeated domain
Return type:array of float64 with shape (num_repeats, dim)
generate_uniform_random_points_in_domain(num_points, random_source=None)[source]

Generate AT MOST num_points uniformly distributed points from the domain.

Unlike many of this class’s other member functions, generate_uniform_random_points_in_domain() is not as simple as calling the kernel’s member function num_repeats times. To obtain the same distribution, we have to additionally “transpose” (see implementation for details).

Note

The number of points returned may be LESS THAN num_points!

Implementations may use rejection sampling. In such cases, generating the requested number of points may be unreasonably slow, so implementers are allowed to generate fewer than num_points results.

Parameters:
  • num_points (integer >= 0) – max number of points to generate
  • random_source (callable yielding uniform random numbers in [0,1]) –
Returns:

uniform random sampling of points from the domain; may be fewer than num_points!

Return type:

array of float64 with shape (num_points_generated, num_repeats, dim)

get_bounding_box()[source]

Return a list of ClosedIntervals representing a bounding box for this domain.

get_constraint_list()[source]

Return a list of lambda functions expressing the domain bounds as linear constraints. Used by COBYLA.

Calls self._domain.get_constraint_list() for each repeat, writing the results sequentially. So output[0:2*dim] is from the first repeated domain, output[2*dim:4*dim] is from the second, etc.

Returns:a list of lambda functions corresponding to constraints
Return type:array of lambda functions with shape (num_repeats * dim * 2)

moe.optimal_learning.python.timing module

Simple context manager for logging timing information.

TODO(GH-299): Make this part of a more complete monitoring setup, flesh out timing tools.

TODO(GH-299): Add a decorator for timing functions.

moe.optimal_learning.python.timing.timing_context(*args, **kwds)[source]

Context manager that logs the runtime of the body of the with-statement.

Uses time.clock() for measurement; not appropriate for fast-running code. Consider the timeit library for such situations.

Parameters:name (str) – name to log with this timing information
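
A brief usage sketch; time.sleep stands in for whatever slow body should be timed and logged:

import time

from moe.optimal_learning.python.timing import timing_context

with timing_context('expensive_block'):
    time.sleep(0.5)  # stand-in for the code whose runtime should be logged under the name 'expensive_block'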

Module contents

The python component of the optimal_learning package, containing wrappers around C++ implementations of features and Python implementations of some of those features.

Files in this package

Major sub-packages

interfaces moe.optimal_learning.python.interfaces

A set of abstract base classes (ABCs) defining an interface for interacting with optimal_learning. These consist of composable functions and classes to build models, perform model selection, and design new experiments.

cpp_wrappers moe.optimal_learning.python.cpp_wrappers

An implementation of the ABCs in interfaces using wrappers around (fast) C++ calls. These routines are meant for “production” runs where high performance is a concern.

Note

the higher level C++ interfaces are generally not composable with objects not in the cpp_wrappers package. So it would be possible to implement moe.optimal_learning.python.interfaces.expected_improvement_interface.ExpectedImprovementInterface in Python and connect it to moe.optimal_learning.python.cpp_wrappers.gaussian_process.GaussianProcess, BUT it is not currently possible to connect moe.optimal_learning.python.cpp_wrappers.expected_improvement.ExpectedImprovement to moe.optimal_learning.python.python_version.gaussian_process.GaussianProcess.

python_version moe.optimal_learning.python.python_version

An implementation of the ABCs in interfaces using Python (with numpy/scipy). These routines are more for educational and experimental purposes. Python is generally simpler than C++ so the hope is that this package is more accessible to new users hoping to learn about optimal_learning. Additionally, development time in Python is shorter, so it could be convenient to test new ideas here before fully implementing them in C++. For example, developers could test a new moe.optimal_learning.python.interfaces.optimization_interface.OptimizerInterface implementation in Python while connecting it to C++ evaluation of objective functions.