cross_entropy 1.0.0
- checksums.yaml +7 -0
- data/README.md +99 -0
- data/lib/cross_entropy.rb +10 -0
- data/lib/cross_entropy/abstract_problem.rb +102 -0
- data/lib/cross_entropy/beta_problem.rb +75 -0
- data/lib/cross_entropy/continuous_problem.rb +44 -0
- data/lib/cross_entropy/matrix_problem.rb +79 -0
- data/lib/cross_entropy/narray_extensions.rb +230 -0
- data/lib/cross_entropy/version.rb +6 -0
- data/test/cross_entropy/beta_problem_test.rb +47 -0
- data/test/cross_entropy/continuous_problem_test.rb +78 -0
- data/test/cross_entropy/cross_entropy_test.rb +149 -0
- metadata +92 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
```yaml
---
SHA1:
  metadata.gz: 04a027fd6b1ff464e5845bd8a85657fda3cef628
  data.tar.gz: 1e542c3df5f6a36d2f13c564c3dfcb46d5e2d69f
SHA512:
  metadata.gz: 040644e34aed5dbe019789af16ba95226ac4e9c09e8be119d76aff5cf5d30946b286cfe0cdd7c700356217389c4b90060fcb564856e3f618a20fff6830e7fca6
  data.tar.gz: a57a8845f9e8e3e56bc1c91ccda84bb6961d861fb69ef7f1c8c1b4518e3561b7fcdeb8ff410a19905b4971eba4f9a84a1fcc01d66f8d62896b7e55cca2a11e16
```
data/README.md
ADDED
@@ -0,0 +1,99 @@
# cross_entropy

[![Build Status](https://travis-ci.org/jdleesmiller/cross_entropy.svg?branch=master)](https://travis-ci.org/jdleesmiller/cross_entropy)

https://github.com/jdleesmiller/cross_entropy

## SYNOPSIS

Implementations of the [Cross Entropy Method](https://en.wikipedia.org/wiki/Cross-entropy_method) for several types of problems. Uses [NArray](http://masa16.github.io/narray/) for the numerics, to achieve reasonable performance.

### What is the Cross Entropy method?

It's basically like a [genetic algorithm](https://en.wikipedia.org/wiki/Genetic_algorithm) without the biological stuff. Instead, it works on nice, pure probability distributions. You start by specifying a probability distribution for the optimal values, based on your initial guess. The CEM then

- generates samples based on that distribution,
- scores them according to the objective function, and
- uses the highest-scoring samples to update the parameters of the probability distribution, so it converges on an optimal value.

It has relatively few tunable parameters, and it automatically balances diversification and intensification. It is robust to noise in the objective function, so it is very useful for parameter tuning and simulation work.
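In miniature, that loop fits in a dozen lines of plain Ruby. This is a standalone sketch of the idea, not this gem's API; the toy objective and all constants here are illustrative:

```ruby
# One-dimensional CEM sketch: sample, keep the elite, refit the distribution.
f = ->(x) { (x - 3.0)**2 } # toy objective; minimum at x = 3
mean, stddev = 0.0, 10.0   # initial guess: a broad distribution

100.times do
  # 1. Generate samples from the current distribution (Box-Muller transform).
  samples = Array.new(1000) do
    mean + stddev * Math.sqrt(-2 * Math.log(1 - rand)) * Math.cos(2 * Math::PI * rand)
  end
  # 2. Score them and keep the elite (lowest-scoring) few.
  elite = samples.sort_by(&f).first(10)
  # 3. Refit the distribution's parameters to the elite samples.
  mean = elite.sum / elite.size
  stddev = Math.sqrt(elite.sum { |x| (x - mean)**2 } / elite.size)
end
mean # converges to roughly 3.0
```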
### Supported problem types

- MatrixProblem: For discrete optimisation problems. Each variable can take one of a fixed number of states. The sampling distribution is defined by a probability mass function for each variable. The term "matrix problem" is based on the idea that we can write the PMFs for each variable into the rows (NArray dimension 1) of a matrix (a usage sketch follows this list). For example:
```
              value 1 | value 2
  variable 1      0.3 |     0.7
  variable 2      0.9 |     0.1
```

- ContinuousProblem: For continuous unbounded problems. The sampling distribution is a univariate Gaussian.

- BetaProblem: For continuous bounded problems. The sampling distribution is a Beta distribution.
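For instance, here is a `MatrixProblem` over ten binary variables, condensed from this gem's own test suite; the target vector `y_true` just gives the sampler something to search for:

```ruby
y_true = NArray[1,1,1,1,1,0,0,0,0,0]

mp = CrossEntropy::MatrixProblem.new
mp.params = NArray.float(2, 10).fill!(0.5) # 2 values x 10 variables, uniform start
mp.num_samples = 50
mp.num_elite = 5
mp.max_iters = 10

mp.to_score_sample {|sample| y_true.eq(sample).count_false } # to be minimised
mp.solve

mp.most_likely_solution # ideally converges to y_true
```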
### Usage

For example, here is the [Rosenbrock banana function](http://en.wikipedia.org/wiki/Rosenbrock_function) and a custom smooth updater. The function has a global minimum at `(a, a^2)`, but it's hard to find.

```ruby
# Parameters for the "banana" objective function.
a = 1.0
b = 100.0

# Our initial guess at the optimal solution.
# This is just a guess, so we give it a large standard deviation.
mean = NArray[0.0, 0.0]
stddev = NArray[10.0, 10.0]

# Set up the problem. These are the CEM parameters.
problem = CrossEntropy::ContinuousProblem.new(mean, stddev)
problem.num_samples = 1000
problem.num_elite = 10
problem.max_iters = 300
smooth = 0.1

# Objective function.
problem.to_score_sample {|x| (a - x[0])**2 + b*(x[1] - x[0]**2)**2 }

# Do some smoothing when updating the parameters based on new samples.
# This isn't strictly required, but I find it often helps convergence.
problem.to_update {|new_mean, new_stddev|
  smooth_mean = smooth*new_mean + (1 - smooth)*problem.param_mean
  smooth_stddev = smooth*new_stddev + (1 - smooth)*problem.param_stddev
  [smooth_mean, smooth_stddev]
}

# It's all calculation from now on...
problem.solve
# problem.param_mean => NArray[1.0, 1.0]
```
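Because the sampling distribution keeps moving, the final `param_mean` is not necessarily the best point ever sampled. When the objective is deterministic, the solver can also track the best sample directly (a sketch, reusing `problem` from above):

```ruby
problem.track_overall_min = true
problem.solve
problem.overall_min_score        # best score observed in any iteration
problem.overall_min_score_sample # the sample that achieved it
```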
## INSTALLATION

    gem install cross_entropy

## LICENSE

(The MIT License)

Copyright (c) 2015 John Lees-Miller

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
'Software'), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/lib/cross_entropy.rb
ADDED
@@ -0,0 +1,10 @@
```ruby
require 'cross_entropy/version'

require 'narray'

require 'cross_entropy/narray_extensions'
require 'cross_entropy/abstract_problem'
require 'cross_entropy/matrix_problem'
require 'cross_entropy/continuous_problem'
require 'cross_entropy/beta_problem'
```
data/lib/cross_entropy/abstract_problem.rb
ADDED
@@ -0,0 +1,102 @@
```ruby
module CrossEntropy
  #
  # Base class for specific problem types.
  #
  class AbstractProblem
    #
    # @param [Array] params
    #
    def initialize params
      @params = params

      @max_iters = nil
      @track_overall_min = false
      @overall_min_score = 1.0 / 0.0
      @overall_min_score_sample = nil

      @generate_samples = proc { raise "no generating function provided" }
      @score_sample = proc {|sample| raise "no score function provided" }
      @estimate = proc {|elite| raise "no estimate function provided" }
      @update = proc {|estimated_params| estimated_params }
      @stop_decision = proc {
        raise "no max_iters provided" unless self.max_iters
        self.num_iters >= self.max_iters
      }

      yield(self) if block_given?
    end

    attr_accessor :params

    attr_accessor :num_samples
    attr_accessor :num_elite
    attr_accessor :max_iters

    def to_generate_samples(&block) @generate_samples = block end

    def to_score_sample(&block) @score_sample = block end

    def to_estimate(&block) @estimate = block end

    def to_update(&block) @update = block end

    def for_stop_decision(&block) @stop_decision = block end

    attr_reader :num_iters
    attr_reader :min_score
    attr_reader :elite_score

    # Keep track of the best sample we've ever seen; if the scoring function is
    # deterministic, then this is a quantity of major interest.
    attr_reader :overall_min_score
    attr_reader :overall_min_score_sample
    attr_accessor :track_overall_min

    #
    # Generic cross entropy routine.
    #
    def solve
      @num_iters = 0

      begin
        @min_score = nil
        @elite_score = nil

        samples = @generate_samples.call

        # Score each sample.
        scores = NArray.float(self.num_samples)
        for i in 0...self.num_samples
          sample_i = samples[i,true]
          score_i = @score_sample.call(sample_i)

          # Keep track of best ever if requested.
          if track_overall_min && score_i < overall_min_score
            @overall_min_score = score_i
            @overall_min_score_sample = sample_i
          end

          scores[i] = score_i
        end

        # Find elite quantile (gamma).
        scores_sorted = scores.sort
        @min_score = scores_sorted[0]
        @elite_score = scores_sorted[self.num_elite-1]

        # Take all samples with scores below (or equal to) gamma; note that
        # there may be more than num_elite, due to ties.
        elite = samples[(scores <= elite_score).where, true]

        # Compute new parameter estimates.
        estimated_params = @estimate.call(elite)

        # Update main parameter estimates.
        self.params = @update.call(estimated_params)

        @num_iters += 1
      end until @stop_decision.call
    end
  end
end
```
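The `to_*` and `for_stop_decision` hooks make every stage of the loop pluggable. As a sketch, using the gem's own `ContinuousProblem` subclass (defined below), the default iteration-cap stopping rule can be swapped for a convergence test; the `1e-3` tolerance here is an illustrative choice, not part of the library:

```ruby
problem = CrossEntropy::ContinuousProblem.new(NArray[0.0], NArray[10.0])
problem.num_samples = 100
problem.num_elite = 10
problem.max_iters = 100 # kept as a safety net
problem.to_score_sample {|x| (x[0] - 2.0)**2 }

# Stop early once the sampling distribution has nearly collapsed.
problem.for_stop_decision do
  problem.num_iters >= problem.max_iters ||
    (problem.param_stddev < 1e-3).all?
end

problem.solve
```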
data/lib/cross_entropy/beta_problem.rb
ADDED
@@ -0,0 +1,75 @@
```ruby
module CrossEntropy
  #
  # Solve a continuous optimisation problem in which the variables are bounded
  # to the unit interval, [0, 1]. The sampling distribution of each parameter
  # is assumed to be a Beta distribution with parameters alpha and
  # beta.
  #
  class BetaProblem < AbstractProblem
    include NMath

    def initialize alpha, beta
      super [alpha, beta]

      @generate_samples = proc { self.generate_beta_samples }
      @estimate = proc {|elite| self.estimate_mom(elite) }

      yield(self) if block_given?
    end

    def param_alpha; params[0] end
    def param_beta; params[1] end

    #
    # Generate samples.
    #
    def generate_beta_samples
      NArray[*param_alpha.to_a.zip(param_beta.to_a).map {|alpha, beta|
        generate_beta_sample(alpha, beta)
      }]
    end

    #
    # Method of moments estimate using only the given 'elite' solutions.
    #
    # Maximum likelihood estimates for the parameters of the beta distribution
    # are difficult to compute, so we use the method of moments instead; see
    # http://www.itl.nist.gov/div898/handbook/eda/section3/eda366h.htm
    # for more information.
    #
    # @param [NArray] elite elite samples; dimension 0 is the sample index; the
    #   remaining dimensions contain the samples
    #
    # @return [Array] the estimated parameter arrays
    #
    def estimate_mom elite
      mean = elite.mean(0)
      variance = elite.stddev(0)**2

      q = mean * (1.0 - mean)
      # Element-wise conjunction of the two masks; a plain `&&` here would
      # silently discard the positivity check.
      valid = (variance > 0) & (variance < q)
      r = q[valid] / variance[valid] - 1

      alpha = NArray[*param_alpha.map(&:to_f)]
      alpha[valid] = mean[valid] * r

      beta = NArray[*param_beta.map(&:to_f)]
      beta[valid] = (1.0 - mean[valid]) * r

      [alpha, beta]
    end

    private

    def generate_erlang_samples k
      -log(NArray.float(k, num_samples).random).sum(0)
    end

    def generate_beta_sample alpha, beta
      a = generate_erlang_samples(alpha)
      b = generate_erlang_samples(beta)
      a / (a + b)
    end
  end
end
```
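The private sampler uses the classic construction that a ratio of Gamma variates is Beta-distributed; since it builds the Gamma variates as Erlang sums of exponentials, the alpha and beta parameters appear to be truncated to integer shape values when sampling. A usage sketch, condensed from this gem's test suite: minimise the banana function over the unit square, starting from a flat Beta(1, 1) (that is, uniform) prior:

```ruby
problem = CrossEntropy::BetaProblem.new(NArray[1.0, 1.0], NArray[1.0, 1.0])
problem.num_samples = 1000
problem.num_elite = 10
problem.max_iters = 10

# Rosenbrock banana with a = 0.5, so the minimum (a, a^2) lies inside [0,1]^2.
problem.to_score_sample {|x| (0.5 - x[0])**2 + 100.0*(x[1] - x[0]**2)**2 }
problem.solve

# The mean of a Beta(alpha, beta) is alpha / (alpha + beta):
problem.param_alpha / (problem.param_alpha + problem.param_beta)
```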
data/lib/cross_entropy/continuous_problem.rb
ADDED
@@ -0,0 +1,44 @@
```ruby
module CrossEntropy
  #
  # Solve a continuous optimisation problem. The sampling distribution of each
  # parameter is assumed to be a 1D Gaussian with given mean and variance.
  #
  class ContinuousProblem < AbstractProblem
    def initialize mean, stddev
      super [mean, stddev]

      @generate_samples = proc { self.generate_gaussian_samples }
      @estimate = proc {|elite| self.estimate_ml(elite) }

      yield(self) if block_given?
    end

    def param_mean; params[0] end
    def param_stddev; params[1] end

    def sample_shape; param_mean.shape end

    #
    # Generate samples.
    #
    def generate_gaussian_samples
      r = NArray.float(num_samples, *sample_shape).randomn
      mean = param_mean.reshape(1, *sample_shape)
      stddev = param_stddev.reshape(1, *sample_shape)
      mean + stddev * r
    end

    #
    # Maximum likelihood estimate using only the given 'elite' solutions.
    #
    # @param [NArray] elite elite samples; dimension 0 is the sample index; the
    #   remaining dimensions contain the samples
    #
    # @return [Array] the estimated parameter arrays
    #
    def estimate_ml elite
      [elite.mean(0), elite.stddev(0)]
    end
  end
end
```
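A quick check of the estimator's conventions (a sketch): dimension 0 of `elite` is the sample index, so the two samples below are `[1.0, 2.0]` and `[3.0, 4.0]`.

```ruby
problem = CrossEntropy::ContinuousProblem.new(NArray[0.0, 0.0], NArray[1.0, 1.0])

elite = NArray[[1.0, 3.0], [2.0, 4.0]] # shape [2, 2]; dim 0 = sample index
mean, stddev = problem.estimate_ml(elite)
mean # => NArray[2.0, 3.0] -- per-variable means over the two samples
```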
data/lib/cross_entropy/matrix_problem.rb
ADDED
@@ -0,0 +1,79 @@
```ruby
module CrossEntropy
  #
  # Assuming that the data are probabilities in an NArray (say dim 1 or dim 2
  # for now). Rows (NArray dimension 1) must sum to one. Columns (NArray
  # dimension 0) represent the quantities to be optimized.
  #
  # Caller should set seed with NArray.srand before calling.
  #
  class MatrixProblem < AbstractProblem
    using NArrayExtensions

    def initialize(params = nil)
      super(params)

      # Configurable procs.
      @generate_samples = proc { self.generate_samples_directly }
      @estimate = proc {|elite| self.estimate_ml(elite) }
      @update = proc {|pr_est| pr_est }
    end

    def num_variables; @params.shape[1] end
    def num_values; @params.shape[0] end

    #
    # Generate samples directly from the probability matrix {#params}.
    #
    # If your problem is tightly constrained, you may want to provide a custom
    # sample generation routine that avoids infeasible solutions; see
    # {#to_generate_samples}.
    #
    def generate_samples_directly
      self.params.tile(1,1,self.num_samples).sample_pmf_dim.transpose(1,0)
    end

    #
    # Maximum likelihood estimate using only the given 'elite' solutions.
    #
    # This is often (but not always) the optimal estimate for the probabilities
    # from the elite samples for problems of this form.
    #
    # @param [NArray] elite {#num_variables} rows; the number of columns depends
    #   on the {#num_elite} parameter, but is typically less than
    #   {#num_samples}; elements are integer in [0, {#num_values})
    #
    # @return [NArray] {#num_variables} rows; {#num_values} columns; entries are
    #   non-negative floats in [0,1] and sum to 1
    #
    def estimate_ml elite
      pr_est = NArray.float(self.num_values, self.num_variables)
      for i in 0...num_variables
        elite_i = elite[true,i]
        for j in 0...num_values
          pr_est[j,i] = elite_i.eq(j).count_true
        end
      end
      pr_est /= elite.shape[0]
      pr_est
    end

    #
    # Find most likely solution so far based on given probabilities.
    #
    # @param [NArray] pr probability matrix with {#num_variables} rows and
    #   {#num_values} columns; if not specified, the current {#params} matrix
    #   is used
    #
    # @return [NArray] column vector with {#num_variables} integer entries in
    #   [0, {#num_values})
    #
    def most_likely_solution pr=self.params
      pr_eq = pr.eq(pr.max(0).tile(1,pr.shape[0]).transpose(1,0))
      pr_ml = NArray.int(pr_eq.shape[1])
      for i in 0...pr_eq.shape[1]
        pr_ml[i] = pr_eq[true,i].where[0]
      end
      pr_ml
    end
  end
end
```
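For example (a sketch): with the PMF matrix below, one row per variable summing to one along dimension 0, the most likely solution picks each variable's highest-probability value, resolving ties to the lowest index (as the gem's tests confirm).

```ruby
mp = CrossEntropy::MatrixProblem.new(NArray[[0.2, 0.8],
                                            [0.6, 0.4],
                                            [0.5, 0.5]])
mp.most_likely_solution # => NArray[1, 0, 0] -- the tie for variable 2 goes to value 0
```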
data/lib/cross_entropy/narray_extensions.rb
ADDED
@@ -0,0 +1,230 @@
```ruby
module CrossEntropy
  #
  # Some extensions to NArray.
  #
  # Note that I've opened a pull request for general cumsum and tile, but it's
  # still open without comment after three years, so maybe they don't like
  # them. https://github.com/masa16/narray/pull/7
  #
  module NArrayExtensions
    refine NArray do
      #
      # Cumulative sum along dimension +dim+; modifies this array in place.
      #
      # @param [Number] dim non-negative
      #
      # @return [NArray] self
      #
      def cumsum_general! dim=0
        if self.dim > dim
          if self.dim == 1
            # use the built-in version for dimension 1
            self.cumsum_1!
          else
            # for example, if this is a matrix and dim = 0, mask_0 selects the
            # first column of the matrix and mask_1 selects the second column;
            # then we just shuffle them along and accumulate.
            mask_0 = (0...self.dim).map{|d| d == dim ? 0 : true}
            mask_1 = (0...self.dim).map{|d| d == dim ? 1 : true}
            while mask_1[dim] < self.shape[dim]
              self[*mask_1] += self[*mask_0]
              mask_0[dim] += 1
              mask_1[dim] += 1
            end
          end
        end
        self
      end

      #
      # Cumulative sum along dimension +dim+.
      #
      # @param [Number] dim non-negative
      #
      # @return [NArray]
      #
      def cumsum_general dim=0
        self.dup.cumsum_general!(dim)
      end

      # The built-in cumsum only does vectors (dim 1).
      alias cumsum_1 cumsum
      alias cumsum cumsum_general
      alias cumsum_1! cumsum!
      alias cumsum! cumsum_general!

      #
      # Replicate this array to make a tiled array; this is the matlab
      # function repmat.
      #
      # @param [Array<Number>] reps number of times to repeat in each
      #   dimension; note that reps.size is allowed to be different from
      #   self.dim, and dimensions of size 1 will be added to compensate
      #
      # @return [NArray] with same typecode as self
      #
      def tile *reps
        if self.dim == 0 || reps.member?(0)
          # Degenerate case: 0 dimensions or dimension 0
          res = NArray.new(self.typecode, 0)
        else
          if reps.size <= self.dim
            # Repeat any extra dims once.
            reps = reps + [1]*(self.dim - reps.size)
            tile = self
          else
            # Have to add some more dimensions (with implicit shape[dim] = 1).
            tile_shape = self.shape + [1]*(reps.size - self.dim)
            tile = self.reshape(*tile_shape)
          end

          # Allocate tiled matrix.
          res_shape = (0...tile.dim).map{|i| tile.shape[i] * reps[i]}
          res = NArray.new(self.typecode, *res_shape)

          # Copy tiles.
          # This probably isn't the most efficient way of doing this; just
          # doing res[] = tile doesn't seem to work in general
          nested_for_zero_to(reps) do |tile_pos|
            tile_slice = (0...tile.dim).map{|i|
              (tile.shape[i] * tile_pos[i])...(tile.shape[i] * (tile_pos[i]+1))}
            res[*tile_slice] = tile
          end
        end
        res
      end

      #
      # Convert a linear (1D) index into subscripts for an array with the
      # given shape; this is the matlab function ind2sub.
      #
      # (TODO: There must be a function in NArray to do this, but I can't find
      # it.)
      #
      # @param [Integer] index non-negative
      #
      # @return [Array<Integer>] subscript corresponding to the given linear
      #   index; this is the same size as +shape+
      #
      def index_to_subscript index
        raise IndexError.new("out of bounds: index=#{index} for shape=#{
          self.shape.inspect}") if index >= self.size

        self.shape.map {|s| index, r = index.divmod(s); r }
      end

      #
      # Sample from an array that represents an empirical probability mass
      # function (pmf). It is assumed that this is an array of probabilities,
      # and that the sum over the whole array is one (up to rounding error).
      # An index into the array is chosen in proportion to its probability.
      #
      # @example select a subscript uniform-randomly
      #   NArray.float(3,3,3).fill!(1).div!(3*3*3).sample_pmf #=> [2, 2, 0]
      #
      # @param [NArray] r if you have already generated the random sample, you
      #   can pass it in here; if nil, a random sample will be generated;
      #   this is used for testing; must have shape <tt>[1]</tt> if specified
      #
      # @return [Array<Integer>] subscripts of a randomly selected element in
      #   the array; this is the same size as +shape+
      #
      def sample_pmf r=nil
        self.index_to_subscript(self.flatten.sample_pmf_dim(0, r))
      end

      #
      # Sample from an array in which the given dimension, +dim+, represents
      # an empirical probability mass function (pmf). It is assumed that the
      # entries along +dim+ are probabilities that sum to one (up to rounding
      # error).
      #
      # @example a matrix in which dim 0 sums to 1
      #   NArray[[0.1,0.2,0.7],
      #          [0.3,0.5,0.2],
      #          [0.0,0.2,0.8],
      #          [0.7,0.1,0.2]].sample_pmf_dim(0)
      #   #=> NArray.int(4) [ 1, 1, 2, 0 ]  # one random index into dim 0 per row
      #
      # @param [Integer] dim dimension to sample along
      #
      # @param [NArray] r if you have already generated the random sample, you
      #   can pass it in here; if nil, a random sample will be generated;
      #   this is used for testing; see also sample_cdf_dim
      #
      # @return [NArray] integer subscripts
      #
      def sample_pmf_dim dim=0, r=nil
        self.cumsum(dim).sample_cdf_dim(dim, r)
      end

      #
      # Sample from an array in which the given dimension, +dim+, represents
      # an empirical cumulative distribution function (cdf). It is assumed
      # that the entries along +dim+ are sums of probabilities, and that the
      # last entry along dim should be 1 (up to rounding error).
      #
      # @param [Integer] dim dimension to sample along
      #
      # @param [NArray] r if you have already generated the random sample, you
      #   can pass it in here; if nil, a random sample will be generated;
      #   this is used for testing
      #
      # @return [NArray] integer subscripts
      #
      def sample_cdf_dim dim=0, r=nil
        raise 'self.dim must be > dim' unless self.dim > dim

        # generate random sample, unless one was given for testing
        r_shape = (0...self.dim).map {|i| i == dim ? 1 : self.shape[i]}
        r = NArray.new(self.typecode, *r_shape).random! unless r

        # allocate space for results -- same size as the random sample
        res = NArray.int(*r_shape)

        # for every other dimension, look for the first element that is over
        # the threshold
        nested_for_zero_to(r_shape) do |slice|
          r_thresh = r[*slice]
          res[*slice] = self.shape[dim] - 1 # default to last
          self_slice = slice.dup
          for self_slice[dim] in 0...self.shape[dim]
            if r_thresh < self[*self_slice]
              res[*slice] = self_slice[dim]
              break
            end
          end
        end

        res[*(0...self.dim).map {|i| i == dim ? 0 : true}]
      end

      private

      #
      # This is effectively <tt>suprema.size</tt> nested 'for' loops, in which
      # the outermost loop runs over <tt>0...suprema.first</tt>, and the
      # innermost loop runs over <tt>0...suprema.last</tt>.
      #
      # For example, when +suprema+ is [3], it yields [0], [1] and [2], and
      # when +suprema+ is [3,2] it yields [0,0], [0,1], [1,0], [1,1], [2,0]
      # and [2,1].
      #
      # @param [Array<Integer>] suprema non-negative entries; does not yield
      #   if empty
      #
      # @return [nil]
      #
      def nested_for_zero_to suprema
        unless suprema.empty?
          nums = suprema.map{|n| (0...n).to_a}
          nums.first.product(*nums.drop(1)).each do |num|
            yield num
          end
        end
        nil
      end
    end
  end
end
```
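Since these are refinements, they only take effect where explicitly activated. A sketch of the extensions in use, with `using` at the top level of the calling script:

```ruby
require 'cross_entropy'
using CrossEntropy::NArrayExtensions

NArray[1, 2, 3].cumsum           # => NArray[1, 3, 6]
NArray[[1, 2], [3, 4]].cumsum(1) # => NArray[[1, 2], [4, 6]]
NArray[1, 2].tile(2, 2)          # => NArray[[1, 2, 1, 2], [1, 2, 1, 2]]
```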
data/test/cross_entropy/beta_problem_test.rb
ADDED
@@ -0,0 +1,47 @@
```ruby
require 'cross_entropy'
require 'minitest/autorun'

class TestBetaProblem < MiniTest::Test
  # tolerance for numerical comparisons
  DELTA = 1e-3

  def assert_narray_close exp, obs
    assert exp.shape == obs.shape && ((exp - obs).abs < DELTA).all?,
      "#{exp.inspect} expected; got\n#{obs.inspect}"
  end

  #
  # See http://en.wikipedia.org/wiki/Rosenbrock_function
  #
  # The function has a global minimum at $(a, a^2)$, but it's hard to find.
  #
  def test_rosenbrock_banana
    NArray.srand(567) # must use NArray's generator, not Ruby's

    a = 0.5
    b = 100.0
    smooth = 0.1

    alpha = NArray[1.0, 1.0]
    beta = NArray[1.0, 1.0]

    problem = CrossEntropy::BetaProblem.new(alpha, beta)
    problem.num_samples = 1000
    problem.num_elite = 10
    problem.max_iters = 10

    problem.to_score_sample {|x| (a - x[0])**2 + b*(x[1] - x[0]**2)**2 }

    problem.to_update {|new_alpha, new_beta|
      smooth_alpha = smooth*new_alpha + (1 - smooth)*problem.param_alpha
      smooth_beta = smooth*new_beta + (1 - smooth)*problem.param_beta
      [smooth_alpha, smooth_beta]
    }

    problem.solve

    estimates = problem.param_alpha / (problem.param_alpha + problem.param_beta)
    assert_narray_close NArray[0.5, 0.25], estimates
    assert problem.num_iters <= problem.max_iters
  end
end
```
data/test/cross_entropy/continuous_problem_test.rb
ADDED
@@ -0,0 +1,78 @@
```ruby
require 'cross_entropy'
require 'minitest/autorun'

class TestContinuousProblem < MiniTest::Test
  # tolerance for numerical comparisons
  DELTA = 1e-6

  include NMath

  def assert_narray_close exp, obs
    assert exp.shape == obs.shape && ((exp - obs).abs < DELTA).all?,
      "#{exp.inspect} expected; got\n#{obs.inspect}"
  end

  #
  # Example 3.1 from Kroese et al. 2006.
  #
  # Maximise $e^{-(x-2)^2} + 0.8 e^{-(x+2)^2}$ for real $x$. The function has
  # a global maximum at x = 2 and a local maximum at x = -2, which we should
  # avoid.
  #
  # (This is also the example on Wikipedia.)
  #
  def test_Kroese_3_1
    NArray.srand(567) # must use NArray's generator, not Ruby's

    mean = NArray[0.0]
    stddev = NArray[10.0]

    problem = CrossEntropy::ContinuousProblem.new(mean, stddev)
    problem.num_samples = 100
    problem.num_elite = 10
    problem.max_iters = 100

    # NB: maximising
    problem.to_score_sample {|x| -(exp(-(x-2)**2) + 0.8 * exp(-(x+2)**2)) }

    problem.solve

    assert_narray_close NArray[2.0], problem.param_mean
    assert problem.num_iters <= problem.max_iters
  end

  #
  # See http://en.wikipedia.org/wiki/Rosenbrock_function
  #
  # The function has a global minimum at $(a, a^2)$, but it's hard to find.
  #
  def test_rosenbrock_banana
    NArray.srand(567) # must use NArray's generator, not Ruby's

    a = 1.0
    b = 100.0
    smooth = 0.1

    mean = NArray[0.0, 0.0]
    stddev = NArray[10.0, 10.0]

    problem = CrossEntropy::ContinuousProblem.new(mean, stddev)
    problem.num_samples = 1000
    problem.num_elite = 10
    problem.max_iters = 300

    problem.to_score_sample {|x| (a - x[0])**2 + b*(x[1] - x[0]**2)**2 }

    problem.to_update {|new_mean, new_stddev|
      smooth_mean = smooth*new_mean + (1 - smooth)*problem.param_mean
      smooth_stddev = smooth*new_stddev + (1 - smooth)*problem.param_stddev
      [smooth_mean, smooth_stddev]
    }

    problem.solve

    assert_narray_close NArray[1.0, 1.0], problem.param_mean
    assert problem.num_iters <= problem.max_iters
  end
end
```
data/test/cross_entropy/cross_entropy_test.rb
ADDED
@@ -0,0 +1,149 @@
```ruby
require 'cross_entropy'
require 'minitest/autorun'

class TestCrossEntropy < MiniTest::Test
  # tolerance for numerical comparisons
  DELTA = 1e-6

  def assert_narray_close exp, obs
    assert exp.shape == obs.shape && ((exp - obs).abs < DELTA).all?,
      "#{exp.inspect} expected; got\n#{obs.inspect}"
  end

  def test_ce_estimate_ml
    mp = CrossEntropy::MatrixProblem.new
    mp.params = NArray.float(2, 4).fill!(0.5)
    mp.num_samples = 50
    mp.num_elite = 3

    # Note that the number of columns in elite can be > num_elite due to ties.
    elite = NArray[[1,0,0,0],
                   [0,1,0,0],
                   [0,0,1,0],
                   [0,0,0,1]]
    pr_est = mp.estimate_ml(elite)
    assert_equal [2, 4], pr_est.shape
    assert_narray_close NArray[[0.75, 0.25],
                               [0.75, 0.25],
                               [0.75, 0.25],
                               [0.75, 0.25]], pr_est

    # All samples the same.
    elite = NArray[[0,0,0,0],
                   [1,1,1,1],
                   [0,0,0,0],
                   [0,0,0,0]]
    pr_est = mp.estimate_ml(elite)
    assert_equal [2, 4], pr_est.shape
    assert_narray_close NArray[[1.0, 0.0],
                               [0.0, 1.0],
                               [1.0, 0.0],
                               [1.0, 0.0]], pr_est
  end

  def test_ce_most_likely_solution
    mp = CrossEntropy::MatrixProblem.new
    mp.params = NArray.float(4, 3).fill!(0.25)
    mp.num_samples = 50
    mp.num_elite = 3

    # When there is a tie, the lowest value is taken.
    assert_equal NArray[0,0,0], mp.most_likely_solution

    mp.params = NArray[[0.0,0.0,0.0,1.0],
                       [1.0,0.0,0.0,0.0],
                       [0.2,0.2,0.2,0.4]]
    assert_equal NArray[3,0,3], mp.most_likely_solution

    mp.params = NArray[[0.0,0.0,1.0,0.0],
                       [0.0,1.0,0.0,0.0],
                       [0.1,0.3,0.4,0.2]]
    assert_equal NArray[2,1,2], mp.most_likely_solution
  end

  #
  # Example 1.2 from de Boer et al. 2005.
  # The aim is to search for the given Boolean vector y_true.
  # The MatrixProblem's default estimation rule is equivalent to equation (8).
  #
  def test_ce_deBoer_1
    NArray.srand(567) # must use NArray's generator, not Ruby's

    n = 10
    y_true = NArray[1,1,1,1,1,0,0,0,0,0]

    mp = CrossEntropy::MatrixProblem.new
    mp.params = NArray.float(2, n).fill!(0.5)
    mp.num_samples = 50
    mp.num_elite = 5
    mp.max_iters = 10

    mp.to_score_sample do |sample|
      y_true.eq(sample).count_false # to be minimized
    end

    mp.solve

    if y_true != mp.most_likely_solution
      warn "expected #{y_true}; found #{mp.most_likely_solution}"
    end
    assert mp.num_iters <= mp.max_iters
  end

  #
  # Example 3.1 from de Boer et al. 2005.
  # This is a max-cut problem.
  # We also do some smoothing.
  #
  def test_ce_deBoer_2
    NArray.srand(567) # must use NArray's generator, not Ruby's

    # Cost matrix
    n = 5
    c = NArray[[0,1,3,5,6],
               [1,0,3,6,5],
               [3,3,0,2,2],
               [5,6,2,0,2],
               [6,5,2,2,0]]

    mp = CrossEntropy::MatrixProblem.new
    mp.params = NArray.float(2, n).fill!(0.5)
    mp.params[true,0] = NArray[0.0,1.0] # put vertex 0 in subset 1
    mp.num_samples = 50
    mp.num_elite = 5
    mp.max_iters = 10
    smooth = 0.4

    max_cut_score = proc do |sample|
      weight = 0
      for i in 0...n
        for j in 0...n
          weight += c[j,i] if sample[i] < sample[j]
        end
      end
      -weight # to be minimized
    end
    best_cut = NArray[1,1,0,0,0]
    assert_equal(-15, max_cut_score.call(NArray[1,0,0,0,0]))
    assert_equal(-28, max_cut_score.call(best_cut))

    mp.to_score_sample(&max_cut_score)

    mp.to_update do |pr_iter|
      smooth*pr_iter + (1 - smooth)*mp.params
    end

    mp.for_stop_decision do
      #p mp.params
      mp.num_iters >= mp.max_iters
    end

    mp.solve

    if best_cut != mp.most_likely_solution
      warn "expected #{best_cut}; found #{mp.most_likely_solution}"
    end
    assert mp.num_iters <= mp.max_iters
  end
end
```
metadata
ADDED
@@ -0,0 +1,92 @@
```yaml
--- !ruby/object:Gem::Specification
name: cross_entropy
version: !ruby/object:Gem::Version
  version: 1.0.0
platform: ruby
authors:
- John Lees-Miller
autorequire:
bindir: bin
cert_chain: []
date: 2016-01-02 00:00:00.000000000 Z
dependencies:
- !ruby/object:Gem::Dependency
  name: narray
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.6'
  type: :runtime
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '0.6'
- !ruby/object:Gem::Dependency
  name: gemma
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '4.1'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '4.1'
description: Includes solvers for continuous and discrete multivariate optimisation
  problems.
email:
- jdleesmiller@gmail.com
executables: []
extensions: []
extra_rdoc_files:
- README.md
files:
- README.md
- lib/cross_entropy.rb
- lib/cross_entropy/abstract_problem.rb
- lib/cross_entropy/beta_problem.rb
- lib/cross_entropy/continuous_problem.rb
- lib/cross_entropy/matrix_problem.rb
- lib/cross_entropy/narray_extensions.rb
- lib/cross_entropy/version.rb
- test/cross_entropy/beta_problem_test.rb
- test/cross_entropy/continuous_problem_test.rb
- test/cross_entropy/cross_entropy_test.rb
homepage: https://github.com/jdleesmiller/cross_entropy
licenses: []
metadata: {}
post_install_message:
rdoc_options:
- "--main"
- README.md
- "--title"
- cross_entropy-1.0.0 Documentation
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
requirements: []
rubyforge_project:
rubygems_version: 2.4.6
signing_key:
specification_version: 4
summary: Solve optimisation problems with the Cross Entropy Method.
test_files:
- test/cross_entropy/beta_problem_test.rb
- test/cross_entropy/continuous_problem_test.rb
- test/cross_entropy/cross_entropy_test.rb
has_rdoc:
```