cross_entropy 1.0.0 → 1.1.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz: 04a027fd6b1ff464e5845bd8a85657fda3cef628
-  data.tar.gz: 1e542c3df5f6a36d2f13c564c3dfcb46d5e2d69f
+  metadata.gz: 64569dfb8741ff5fae2bb7ed5176b273e8a3ba34
+  data.tar.gz: a8b475c6a7898d692c0ce984ef41a3f2187e11e5
 SHA512:
-  metadata.gz: 040644e34aed5dbe019789af16ba95226ac4e9c09e8be119d76aff5cf5d30946b286cfe0cdd7c700356217389c4b90060fcb564856e3f618a20fff6830e7fca6
-  data.tar.gz: a57a8845f9e8e3e56bc1c91ccda84bb6961d861fb69ef7f1c8c1b4518e3561b7fcdeb8ff410a19905b4971eba4f9a84a1fcc01d66f8d62896b7e55cca2a11e16
+  metadata.gz: f1231183cdac6d29c90708d0781465aeee2ac996f02075026610b8f781a2241c41a2edb71010b2944b5e65b8493dd4c120f6db328d46f9c4cdfbb067e8cc70bc
+  data.tar.gz: bf1737897c3e6afc7e36d3bd90c0ce2273b9beb3da6ddc0f39f1308ac58aa5c5e207b820b49e3d0411af3580847dcdb5c0eefbd593543da3e6e8951ff2258573
data/README.md CHANGED
@@ -1,8 +1,9 @@
 # cross_entropy
 
 [![Build Status](https://travis-ci.org/jdleesmiller/cross_entropy.svg?branch=master)](https://travis-ci.org/jdleesmiller/cross_entropy)
+[![Gem Version](https://badge.fury.io/rb/cross_entropy.svg)](https://badge.fury.io/rb/cross_entropy)
 
-https://github.com/jdleesmiller/cross_entropy
+https://github.com/jdleesmiller/cross_entropy
 
 ## SYNOPSIS
 
@@ -10,12 +11,12 @@ Implementations of the [Cross Entropy Method](https://en.wikipedia.org/wiki/Cros
 
 ### What is the Cross Entropy method?
 
-It's basically like a [genetic algorithm](https://en.wikipedia.org/wiki/Genetic_algorithm) without the biological stuff. Instead, it works on nice, pure probability distributions. You start by specifying a probability distribution for the optimal values, based on your initial guess. The CEM then
+It's basically like a [genetic algorithm](https://en.wikipedia.org/wiki/Genetic_algorithm) without the biological analogy. Instead, it uses probability distributions. You start by specifying a probability distribution for the optimal values, based on your initial guess. The CEM then
 
 - generates samples based on that distribution,
 - scores them according to the objective function, and
-- uses the highest-scoring samples to update the parameters of the probability distribution, so it converges on an optimal value.
+- uses the lowest-scoring samples (that is, this library assumes that we want to minimize the objective function) to update the parameters of the probability distribution, so it converges on an optimal value.
 
-It has relatively few tunable parameters, and it automatically balances diversification and intensification. It is robust to noise in the objective function, so it is very useful for parameter tuning and simulation work.
+It has relatively few tuneable parameters, and it automatically balances diversification and intensification. It is robust to noise in the objective function, so it is very useful for parameter tuning and simulation work.
 
 ### Supported problem types
 
@@ -72,11 +73,20 @@ problem.solve
 
 gem install cross_entropy
 
+## HISTORY
+
+### 1.1.0 - 6 May 2017
+
+- Linted with rubocop
+- Improved test coverage
+- Added recent rubies in CI
+- Improved README
+
 ## LICENSE
 
 (The MIT License)
 
-Copyright (c) 2015 John Lees-Miller
+Copyright (c) 2017 John Lees-Miller
 
 Permission is hereby granted, free of charge, to any person obtaining
 a copy of this software and associated documentation files (the
@@ -96,4 +106,3 @@ IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
 CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
 TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
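The generate → score → refit loop described in the README above can be sketched in a few lines of plain Ruby (no NArray, and not this gem's API; the constant and variable names here are illustrative only). It minimizes (x - 3)^2 with a 1D Gaussian sampling distribution, the same setup the gem's `ContinuousProblem` handles:

```ruby
NUM_SAMPLES = 100
NUM_ELITE = 10
MAX_ITERS = 30

rng = Random.new(42)

# Box-Muller transform: one standard normal variate from two uniforms.
gaussian = lambda do
  u1 = 1.0 - rng.rand # in (0, 1], so the log is finite
  u2 = rng.rand
  Math.sqrt(-2.0 * Math.log(u1)) * Math.cos(2.0 * Math::PI * u2)
end

mean = 0.0
stddev = 10.0

MAX_ITERS.times do
  # 1. Generate samples from the current distribution.
  samples = Array.new(NUM_SAMPLES) { mean + stddev * gaussian.call }

  # 2. Score them; lower is better, as in this library.
  sorted = samples.sort_by { |x| (x - 3.0)**2 }

  # 3. Refit mean and stddev to the lowest-scoring (elite) samples.
  elite = sorted.first(NUM_ELITE)
  mean = elite.sum / NUM_ELITE
  stddev = Math.sqrt(elite.sum { |x| (x - mean)**2 } / NUM_ELITE)
end

puts mean # close to 3.0, the minimizer
```

The stddev shrinks as the elite samples cluster around the optimum, which is how the method trades off exploration against convergence.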
@@ -1,3 +1,5 @@
+# frozen_string_literal: true
+
 require 'cross_entropy/version'
 
 require 'narray'
@@ -7,4 +9,3 @@ require 'cross_entropy/abstract_problem'
 require 'cross_entropy/matrix_problem'
 require 'cross_entropy/continuous_problem'
 require 'cross_entropy/beta_problem'
-
@@ -1,3 +1,5 @@
+# frozen_string_literal: true
+
 module CrossEntropy
   #
   # Base class for specific problem types.
@@ -6,7 +8,7 @@ module CrossEntropy
     #
     # @param [Array] params
     #
-    def initialize params
+    def initialize(params)
       @params = params
 
       @max_iters = nil
@@ -14,14 +16,14 @@ module CrossEntropy
       @overall_min_score = 1.0 / 0.0
       @overall_min_score_sample = nil
 
-      @generate_samples = proc { raise "no generating function provided" }
-      @score_sample = proc {|sample| raise "no score function provided" }
-      @estimate = proc {|elite| raise "no estimate function provided" }
-      @update = proc {|estimated_params| estimated_params }
-      @stop_decision = proc {
-        raise "no max_iters provided" unless self.max_iters
-        self.num_iters >= self.max_iters
-      }
+      @generate_samples = proc { raise 'no generating function provided' }
+      @score_sample = proc { |_sample| raise 'no score block provided' }
+      @estimate = proc { |_elite| raise 'no estimate block provided' }
+      @update = proc { |estimated_params| estimated_params }
+      @stop_decision = proc do
+        raise 'no max_iters provided' unless max_iters
+        num_iters >= max_iters
+      end
 
       yield(self) if block_given?
     end
@@ -32,15 +34,25 @@ module CrossEntropy
     attr_accessor :num_elite
     attr_accessor :max_iters
 
-    def to_generate_samples &block; @generate_samples = block end
+    def to_generate_samples(&block)
+      @generate_samples = block
+    end
 
-    def to_score_sample &block; @score_sample = block end
+    def to_score_sample(&block)
+      @score_sample = block
+    end
 
-    def to_estimate &block; @estimate = block end
+    def to_estimate(&block)
+      @estimate = block
+    end
 
-    def to_update &block; @update = block end
+    def to_update(&block)
+      @update = block
+    end
 
-    def for_stop_decision &block; @stop_decision = block end
+    def for_stop_decision(&block)
+      @stop_decision = block
+    end
 
     attr_reader :num_iters
     attr_reader :min_score
@@ -58,16 +70,16 @@ module CrossEntropy
     def solve
       @num_iters = 0
 
-      begin
+      loop do
        @min_score = nil
        @elite_score = nil
 
        samples = @generate_samples.call
 
        # Score each sample.
-        scores = NArray.float(self.num_samples)
-        for i in 0...self.num_samples
-          sample_i = samples[i,true]
+        scores = NArray.float(num_samples)
+        (0...num_samples).each do |i|
+          sample_i = samples[i, true]
          score_i = @score_sample.call(sample_i)
 
          # Keep track of best ever if requested.
@@ -82,7 +94,7 @@ module CrossEntropy
        # Find elite quantile (gamma).
        scores_sorted = scores.sort
        @min_score = scores_sorted[0]
-        @elite_score = scores_sorted[self.num_elite-1]
+        @elite_score = scores_sorted[num_elite - 1]
 
        # Take all samples with scores below (or equal to) gamma; note that
        # there may be more than num_elite, due to ties.
@@ -95,8 +107,8 @@ module CrossEntropy
        self.params = @update.call(estimated_params)
 
        @num_iters += 1
-      end until @stop_decision.call
+        break if @stop_decision.call
+      end
     end
   end
 end
-
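One of the rubocop-driven changes in `solve` replaces Ruby's post-conditional `begin ... end until` with `loop do ... break if ... end`. Both forms run the body at least once and test the stop condition only after each pass, so they iterate identically; a minimal sketch (counter names are illustrative):

```ruby
# Post-conditional form: body runs, then the condition is checked.
count_a = 0
begin
  count_a += 1
end until count_a >= 3

# Equivalent loop/break form, as used in the new solve.
count_b = 0
loop do
  count_b += 1
  break if count_b >= 3
end

puts count_a == count_b # both loops ran three times
```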
@@ -1,3 +1,5 @@
+# frozen_string_literal: true
+
 module CrossEntropy
   #
   # Solve a continuous optimisation problem in which the variables are bounded
@@ -8,25 +10,30 @@ module CrossEntropy
   class BetaProblem < AbstractProblem
     include NMath
 
-    def initialize alpha, beta
+    def initialize(alpha, beta)
       super [alpha, beta]
 
-      @generate_samples = proc { self.generate_beta_samples }
-      @estimate = proc {|elite| self.estimate_mom(elite) }
+      to_generate_samples { generate_beta_samples }
+      to_estimate { |elite| estimate_mom(elite) }
 
       yield(self) if block_given?
     end
 
-    def param_alpha; params[0] end
-    def param_beta; params[1] end
+    def param_alpha
+      params[0]
+    end
+
+    def param_beta
+      params[1]
+    end
 
     #
     # Generate samples.
     #
     def generate_beta_samples
-      NArray[*param_alpha.to_a.zip(param_beta.to_a).map {|alpha, beta|
+      NArray[*param_alpha.to_a.zip(param_beta.to_a).map do |alpha, beta|
         generate_beta_sample(alpha, beta)
-      }]
+      end]
     end
 
     #
@@ -42,7 +49,7 @@ module CrossEntropy
     #
     # @return [Array] the estimated parameter arrays
    #
-    def estimate_mom elite
+    def estimate_mom(elite)
       mean = elite.mean(0)
       variance = elite.stddev(0)**2
 
@@ -61,15 +68,14 @@ module CrossEntropy
 
     private
 
-    def generate_erlang_samples k
+    def generate_erlang_samples(k)
       -log(NArray.float(k, num_samples).random).sum(0)
     end
 
-    def generate_beta_sample alpha, beta
+    def generate_beta_sample(alpha, beta)
       a = generate_erlang_samples(alpha)
       b = generate_erlang_samples(beta)
       a / (a + b)
     end
   end
 end
-
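`generate_beta_sample` relies on a standard identity: if X ~ Gamma(α, 1) and Y ~ Gamma(β, 1) are independent, then X / (X + Y) ~ Beta(α, β); and for integer shape k, Gamma(k, 1) is an Erlang, i.e. a sum of k unit exponentials, each of which is -log of a uniform. A pure-Ruby sketch of the same trick (no NArray; integer α and β assumed, names hypothetical):

```ruby
RNG = Random.new(1)

# Erlang(k, 1): sum of k independent Exponential(1) draws.
def erlang_sample(k)
  (1..k).sum { -Math.log(1.0 - RNG.rand) }
end

# Beta(alpha, beta) for integer shapes via X / (X + Y).
def beta_sample(alpha, beta)
  x = erlang_sample(alpha)
  y = erlang_sample(beta)
  x / (x + y)
end

samples = Array.new(10_000) { beta_sample(2, 2) }
mean = samples.sum / samples.size
puts mean # close to the true mean 2 / (2 + 2) = 0.5
```

This only works for integer shape parameters; the gem's `estimate_mom` keeps the fitted α and β usable here by construction of the sampling step.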
@@ -1,22 +1,31 @@
+# frozen_string_literal: true
+
 module CrossEntropy
   #
   # Solve a continuous optimisation problem. The sampling distribution of each
   # parameter is assumed to be a 1D Gaussian with given mean and variance.
   #
   class ContinuousProblem < AbstractProblem
-    def initialize mean, stddev
+    def initialize(mean, stddev)
       super [mean, stddev]
 
-      @generate_samples = proc { self.generate_gaussian_samples }
-      @estimate = proc {|elite| self.estimate_ml(elite) }
+      to_generate_samples { generate_gaussian_samples }
+      to_estimate { |elite| estimate_ml(elite) }
 
       yield(self) if block_given?
     end
 
-    def param_mean; params[0] end
-    def param_stddev; params[1] end
+    def param_mean
+      params[0]
+    end
+
+    def param_stddev
+      params[1]
+    end
 
-    def sample_shape; param_mean.shape end
+    def sample_shape
+      param_mean.shape
+    end
 
     #
     # Generate samples.
@@ -36,9 +45,8 @@ module CrossEntropy
     #
     # @return [Array] the estimated parameter arrays
     #
-    def estimate_ml elite
+    def estimate_ml(elite)
       [elite.mean(0), elite.stddev(0)]
     end
   end
 end
-
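`estimate_ml` refits the Gaussian to the elite samples using the sample mean and standard deviation. A pure-Ruby sketch for a single 1D parameter (hypothetical helper; it uses the population 1/n variance, the maximum-likelihood form — whether NArray's `stddev` divides by n or n-1 is not visible in this diff, so that detail is an assumption):

```ruby
# Maximum-likelihood Gaussian fit: sample mean and (population) standard
# deviation of the elite samples.
def estimate_ml(elite)
  mean = elite.sum(0.0) / elite.size
  variance = elite.sum(0.0) { |x| (x - mean)**2 } / elite.size
  [mean, Math.sqrt(variance)]
end

mean, stddev = estimate_ml([1.0, 2.0, 3.0])
puts mean   # 2.0
puts stddev # sqrt(2/3), about 0.816
```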
@@ -1,3 +1,5 @@
+# frozen_string_literal: true
+
 module CrossEntropy
   #
   # Assuming that the data are probabilities in an NArray (say dim 1 or dim 2
@@ -12,24 +14,29 @@ module CrossEntropy
     def initialize(params = nil)
       super(params)
 
-      # Configurable procs.
-      @generate_samples = proc { self.generate_samples_directly }
-      @estimate = proc {|elite| self.estimate_ml(elite) }
-      @update = proc {|pr_est| pr_est }
+      to_generate_samples { generate_samples_directly }
+      to_estimate { |elite| estimate_ml(elite) }
+
+      yield(self) if block_given?
+    end
+
+    def num_variables
+      @params.shape[1]
     end
 
-    def num_variables; @params.shape[1] end
-    def num_values; @params.shape[0] end
+    def num_values
+      @params.shape[0]
+    end
 
     #
-    # Generate samples directly from the probabilities matrix {#pr}.
+    # Generate samples directly from the probabilities matrix {#params}.
     #
     # If your problem is tightly constrained, you may want to provide a custom
     # sample generation routine that avoids infeasible solutions; see
     # {#to_generate_samples}.
     #
     def generate_samples_directly
-      self.params.tile(1,1,self.num_samples).sample_pmf_dim.transpose(1,0)
+      params.tile(1, 1, num_samples).sample_pmf_dim.transpose(1, 0)
     end
 
     #
@@ -45,12 +52,12 @@ module CrossEntropy
     # @return [NArray] {#num_variables} rows; {#num_values} columns; entries are
     #   non-negative floats in [0,1] and sum to 1
     #
-    def estimate_ml elite
-      pr_est = NArray.float(self.num_values, self.num_variables)
-      for i in 0...num_variables
-        elite_i = elite[true,i]
-        for j in 0...num_values
-          pr_est[j,i] = elite_i.eq(j).count_true
+    def estimate_ml(elite)
+      pr_est = NArray.float(num_values, num_variables)
+      (0...num_variables).each do |i|
+        elite_i = elite[true, i]
+        (0...num_values).each do |j|
+          pr_est[j, i] = elite_i.eq(j).count_true
         end
       end
       pr_est /= elite.shape[0]
@@ -61,19 +68,19 @@ module CrossEntropy
     # Find most likely solution so far based on given probabilities.
     #
     # @param [NArray] pr probability matrix with {#num_variables} rows and
-    #   {#num_values} columns; if not specified, the current {#pr} matrix is used
+    #   {#num_values} columns; if not specified, the current {#params} matrix is
+    #   used
     #
     # @return [Narray] column vector with {#num_variables} integer entries in
     #   [0, {#num_values})
     #
-    def most_likely_solution pr=self.params
-      pr_eq = pr.eq(pr.max(0).tile(1,pr.shape[0]).transpose(1,0))
+    def most_likely_solution(pr = params)
+      pr_eq = pr.eq(pr.max(0).tile(1, pr.shape[0]).transpose(1, 0))
       pr_ml = NArray.int(pr_eq.shape[1])
-      for i in 0...pr_eq.shape[1]
-        pr_ml[i] = pr_eq[true,i].where[0]
+      (0...pr_eq.shape[1]).each do |i|
+        pr_ml[i] = pr_eq[true, i].where[0]
      end
      pr_ml
    end
  end
 end
-
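For the discrete `MatrixProblem`, `estimate_ml` is just frequency counting: for each variable, the new probability of each value is the fraction of elite samples that took that value. A pure-Ruby sketch (hypothetical helper; each entry of `elite` is one sample, one value per variable — NArray's axis conventions differ, but the numbers mirror the expected values in this gem's test suite):

```ruby
# Count how often each value occurs for each variable among the elite.
def estimate_ml(elite, num_values)
  num_variables = elite.first.size
  (0...num_variables).map do |i|
    counts = Array.new(num_values, 0)
    elite.each { |sample| counts[sample[i]] += 1 }
    counts.map { |c| c.to_f / elite.size }
  end
end

elite = [[1, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 1]]
p estimate_ml(elite, 2)
# each variable sees value 0 in three of four samples:
# [[0.75, 0.25], [0.75, 0.25], [0.75, 0.25], [0.75, 0.25]]
```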
@@ -1,9 +1,12 @@
+# frozen_string_literal: true
+
 module CrossEntropy
   #
   # Some extensions to NArray.
   #
   # Note that I've opened a pull request for general cumsum and tile, but it's
-  # still open without comment after three years, so maybe they don't like them.
+  # still open without comment after three years, and I think they have stopped
+  # working on this version of narray.
   # https://github.com/masa16/narray/pull/7
   #
   module NArrayExtensions
@@ -15,21 +18,21 @@ module CrossEntropy
     #
     # @return [NArray] self
     #
-    def cumsum_general! dim=0
+    def cumsum_general!(dim = 0)
       if self.dim > dim
         if self.dim == 1
           # use the built-in version for dimension 1
-          self.cumsum_1!
+          cumsum_1!
         else
-          # for example, if this is a matrix and dim = 0, mask_0 selects the
-          # first column of the matrix and mask_1 selects the second column;
+          # for example, if this is a matrix and dim = 0, mask0 selects the
+          # first column of the matrix and mask1 selects the second column;
           # then we just shuffle them along and accumulate.
-          mask_0 = (0...self.dim).map{|d| d == dim ? 0 : true}
-          mask_1 = (0...self.dim).map{|d| d == dim ? 1 : true}
-          while mask_1[dim] < self.shape[dim]
-            self[*mask_1] += self[*mask_0]
-            mask_0[dim] += 1
-            mask_1[dim] += 1
+          mask0 = (0...self.dim).map { |d| d == dim ? 0 : true }
+          mask1 = (0...self.dim).map { |d| d == dim ? 1 : true }
+          while mask1[dim] < shape[dim]
+            self[*mask1] += self[*mask0]
+            mask0[dim] += 1
+            mask1[dim] += 1
          end
        end
      end
@@ -43,15 +46,15 @@ module CrossEntropy
     #
     # @return [NArray]
     #
-    def cumsum_general dim=0
-      self.dup.cumsum_general!(dim)
+    def cumsum_general(dim = 0)
+      dup.cumsum_general!(dim)
     end
 
     # The built-in cumsum only does vectors (dim 1).
-    alias cumsum_1 cumsum
-    alias cumsum cumsum_general
-    alias cumsum_1! cumsum!
-    alias cumsum! cumsum_general!
+    alias_method :cumsum_1, :cumsum
+    alias_method :cumsum, :cumsum_general
+    alias_method :cumsum_1!, :cumsum!
+    alias_method :cumsum!, :cumsum_general!
 
     #
     # Replicate this array to make a tiled array; this is the matlab function
@@ -63,31 +66,34 @@ module CrossEntropy
     #
     # @return [NArray] with same typecode as self
     #
-    def tile *reps
-      if self.dim == 0 || reps.member?(0)
+    def tile(*reps)
+      if dim == 0 || reps.member?(0)
         # Degenerate case: 0 dimensions or dimension 0
-        res = NArray.new(self.typecode, 0)
+        res = NArray.new(typecode, 0)
       else
-        if reps.size <= self.dim
+        if reps.size <= dim
           # Repeat any extra dims once.
-          reps = reps + [1]*(self.dim - reps.size)
+          reps += [1] * (dim - reps.size)
           tile = self
         else
           # Have to add some more dimensions (with implicit shape[dim] = 1).
-          tile_shape = self.shape + [1]*(reps.size - self.dim)
-          tile = self.reshape(*tile_shape)
+          tile_shape = shape + [1] * (reps.size - dim)
+          tile = reshape(*tile_shape)
         end
 
         # Allocate tiled matrix.
-        res_shape = (0...tile.dim).map{|i| tile.shape[i] * reps[i]}
-        res = NArray.new(self.typecode, *res_shape)
+        res_shape = (0...tile.dim).map { |i| tile.shape[i] * reps[i] }
+        res = NArray.new(typecode, *res_shape)
 
        # Copy tiles.
        # This probably isn't the most efficient way of doing this; just doing
        # res[] = tile doesn't seem to work in general
        nested_for_zero_to(reps) do |tile_pos|
-          tile_slice = (0...tile.dim).map{|i|
-            (tile.shape[i] * tile_pos[i])...(tile.shape[i] * (tile_pos[i]+1))}
+          tile_slice = (0...tile.dim).map do |i|
+            start_index = tile.shape[i] * tile_pos[i]
+            end_index = tile.shape[i] * (tile_pos[i] + 1)
+            start_index...end_index
+          end
          res[*tile_slice] = tile
        end
      end
@@ -106,11 +112,17 @@ module CrossEntropy
     # @return [Array<Integer>] subscript corresponding to the given linear
     #   index; this is the same size as +shape+
     #
-    def index_to_subscript index
-      raise IndexError.new("out of bounds: index=#{index} for shape=#{
-        self.shape.inspect}") if index >= self.size
+    def index_to_subscript(index)
+      if index >= size
+        raise \
+          IndexError,
+          "out of bounds: index=#{index} for shape=#{shape.inspect}"
+      end
 
-      self.shape.map {|s| index, r = index.divmod(s); r }
+      shape.map do |s|
+        index, r = index.divmod(s)
+        r
+      end
     end
 
     #
@@ -130,8 +142,8 @@ module CrossEntropy
     # @return [Array<Integer>] subscripts of a randomly selected into the
     #   array; this is the same size as +shape+
     #
-    def sample_pmf r=nil
-      self.index_to_subscript(self.flatten.sample_pmf_dim(0, r))
+    def sample_pmf(r = nil)
+      index_to_subscript(flatten.sample_pmf_dim(0, r))
     end
 
     #
@@ -155,8 +167,8 @@ module CrossEntropy
     #
     # @return [NArray] integer subscripts
     #
-    def sample_pmf_dim dim=0, r=nil
-      self.cumsum(dim).sample_cdf_dim(dim, r)
+    def sample_pmf_dim(dim = 0, r = nil)
+      cumsum(dim).sample_cdf_dim(dim, r)
     end
 
     #
@@ -173,12 +185,12 @@ module CrossEntropy
     #
     # @return [NArray] integer subscripts
     #
-    def sample_cdf_dim dim=0, r=nil
+    def sample_cdf_dim(dim = 0, r = nil)
       raise 'self.dim must be > dim' unless self.dim > dim
 
       # generate random sample, unless one was given for testing
-      r_shape = (0...self.dim).map {|i| i == dim ? 1 : self.shape[i]}
-      r = NArray.new(self.typecode, *r_shape).random! unless r
+      r_shape = (0...self.dim).map { |i| i == dim ? 1 : shape[i] }
+      r = NArray.new(typecode, *r_shape).random! unless r
 
       # allocate space for results -- same size as the random sample
       res = NArray.int(*r_shape)
@@ -187,9 +199,10 @@ module CrossEntropy
       #   threshold
       nested_for_zero_to(r_shape) do |slice|
         r_thresh = r[*slice]
-        res[*slice] = self.shape[dim] - 1 # default to last
+        res[*slice] = shape[dim] - 1 # default to last
         self_slice = slice.dup
-        for self_slice[dim] in 0...self.shape[dim]
+        (0...shape[dim]).each do |i|
+          self_slice[dim] = i
           if r_thresh < self[*self_slice]
             res[*slice] = self_slice[dim]
             break
@@ -197,7 +210,7 @@ module CrossEntropy
         end
       end
 
-      res[*(0...self.dim).map {|i| i == dim ? 0 : true}]
+      res[*(0...self.dim).map { |i| i == dim ? 0 : true }]
     end
 
     private
@@ -216,9 +229,9 @@ module CrossEntropy
     #
     # @return [nil]
     #
-    def nested_for_zero_to suprema
+    def nested_for_zero_to(suprema)
       unless suprema.empty?
-        nums = suprema.map{|n| (0...n).to_a}
+        nums = suprema.map { |n| (0...n).to_a }
         nums.first.product(*nums.drop(1)).each do |num|
           yield num
         end
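`sample_cdf_dim` above scans along the chosen dimension and returns the first index whose cumulative value exceeds the random threshold `r`, defaulting to the last index; that default is why a pmf summing to less than 1 assigns its missing mass to the last entry. The same rule on a plain Ruby array (hypothetical helper name; the expected values mirror this gem's own test suite):

```ruby
# Inverse-CDF sampling: cumulative-sum the pmf, then take the first index
# whose cdf value exceeds the threshold r, defaulting to the last index.
def sample_pmf(pmf, r)
  cdf = pmf.each_with_object([]) { |p, acc| acc << p + (acc.last || 0.0) }
  cdf.index { |c| r < c } || pmf.size - 1
end

pmf = [0.5, 0.2, 0.2] # sums to < 1; the tail mass goes to the last index
puts sample_pmf(pmf, 0.0)  # 0
puts sample_pmf(pmf, 0.5)  # 1
puts sample_pmf(pmf, 0.89) # 2
puts sample_pmf(pmf, 0.91) # 2
```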
@@ -1,6 +1,8 @@
+# frozen_string_literal: true
+
 module CrossEntropy
   VERSION_MAJOR = 1
-  VERSION_MINOR = 0
+  VERSION_MINOR = 1
   VERSION_PATCH = 0
   VERSION = [VERSION_MAJOR, VERSION_MINOR, VERSION_PATCH].join('.')
 end
@@ -1,13 +1,14 @@
-require 'cross_entropy'
-require 'minitest/autorun'
+# frozen_string_literal: true
 
-class TestBetaProblem < MiniTest::Test
-  # tolerance for numerical comparisons
-  DELTA = 1e-3
+require_relative 'test_helper'
 
-  def assert_narray_close exp, obs
-    assert exp.shape == obs.shape && ((exp - obs).abs < DELTA).all?,
-           "#{exp.inspect} expected; got\n#{obs.inspect}"
+class TestBetaProblem < CrossEntropyTest
+  #
+  # Numerical tolerance for comparison. We would have to run for a long time to
+  # get within the default tolerance of 10^-6, so use a less strict tolerance.
+  #
+  def delta
+    1e-3
   end
 
   #
@@ -30,13 +31,13 @@ class TestBetaProblem < MiniTest::Test
     problem.num_elite = 10
     problem.max_iters = 10
 
-    problem.to_score_sample {|x| (a - x[0])**2 + b*(x[1] - x[0]**2)**2 }
+    problem.to_score_sample { |x| (a - x[0])**2 + b * (x[1] - x[0]**2)**2 }
 
-    problem.to_update {|new_alpha, new_beta|
-      smooth_alpha = smooth*new_alpha + (1 - smooth)*problem.param_alpha
-      smooth_beta = smooth*new_beta + (1 - smooth)*problem.param_beta
+    problem.to_update do |new_alpha, new_beta|
+      smooth_alpha = smooth * new_alpha + (1 - smooth) * problem.param_alpha
+      smooth_beta = smooth * new_beta + (1 - smooth) * problem.param_beta
       [smooth_alpha, smooth_beta]
-    }
+    end
 
     problem.solve
 
@@ -1,27 +1,20 @@
-require 'cross_entropy'
-require 'minitest/autorun'
+# frozen_string_literal: true
 
-class TestContinuousProblem < MiniTest::Test
-  # tolerance for numerical comparisons
-  DELTA = 1e-6
+require_relative 'test_helper'
 
+class TestContinuousProblem < CrossEntropyTest
   include NMath
 
-  def assert_narray_close exp, obs
-    assert exp.shape == obs.shape && ((exp - obs).abs < DELTA).all?,
-           "#{exp.inspect} expected; got\n#{obs.inspect}"
-  end
-
   #
   # Example 3.1 from Kroese et al. 2006.
   #
-  # Maximise $e^{-(x-2)^2} + 0.8 e^{(x+2)^2}$ for real $x$. The function has a
+  # Maximise $e^{-(x-2)^2} + 0.8 e^{-(x+2)^2}$ for real $x$. The function has a
   # global maximum at x = 2 and a local maximum at x = -2, which we should
   # avoid.
   #
   # (This is also the example on Wikipedia.)
   #
-  def test_Kroese_3_1
+  def test_kroese_3_1
     NArray.srand(567) # must use NArray's generator, not Ruby's
 
     mean = NArray[0.0]
@@ -33,7 +26,7 @@ class TestContinuousProblem < MiniTest::Test
     problem.max_iters = 100
 
     # NB: maximising
-    problem.to_score_sample {|x| -(exp(-(x-2)**2) + 0.8 * exp(-(x+2)**2)) }
+    problem.to_score_sample { |x| -(exp(-(x - 2)**2) + 0.8 * exp(-(x + 2)**2)) }
 
     problem.solve
 
@@ -61,13 +54,13 @@ class TestContinuousProblem < MiniTest::Test
     problem.num_elite = 10
     problem.max_iters = 300
 
-    problem.to_score_sample {|x| (a - x[0])**2 + b*(x[1] - x[0]**2)**2 }
+    problem.to_score_sample { |x| (a - x[0])**2 + b * (x[1] - x[0]**2)**2 }
 
-    problem.to_update {|new_mean, new_stddev|
-      smooth_mean = smooth*new_mean + (1 - smooth)*problem.param_mean
-      smooth_stddev = smooth*new_stddev + (1 - smooth)*problem.param_stddev
+    problem.to_update do |new_mean, new_stddev|
+      smooth_mean = smooth * new_mean + (1 - smooth) * problem.param_mean
+      smooth_stddev = smooth * new_stddev + (1 - smooth) * problem.param_stddev
       [smooth_mean, smooth_stddev]
-    }
+    end
 
     problem.solve
 
@@ -75,4 +68,3 @@ class TestContinuousProblem < MiniTest::Test
     assert problem.num_iters <= problem.max_iters
   end
 end
-
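The `to_update` blocks in these tests implement parameter smoothing: the next sampling distribution is a convex combination of the newly estimated parameters and the previous ones, which damps oscillation between iterations and slows premature convergence. The rule itself is one line (hypothetical helper name):

```ruby
# Blend the freshly estimated parameter with the previous iteration's value;
# smooth = 1.0 would discard history, smooth = 0.0 would never move.
def smooth_update(new_value, old_value, smooth)
  smooth * new_value + (1 - smooth) * old_value
end

puts smooth_update(2.0, 0.0, 0.4) # 0.8
```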
@@ -1,15 +1,8 @@
-require 'cross_entropy'
-require 'minitest/autorun'
+# frozen_string_literal: true
 
-class TestCrossEntropy < MiniTest::Test
-  # tolerance for numerical comparisons
-  DELTA = 1e-6
-
-  def assert_narray_close exp, obs
-    assert exp.shape == obs.shape && ((exp - obs).abs < DELTA).all?,
-           "#{exp.inspect} expected; got\n#{obs.inspect}"
-  end
+require_relative 'test_helper'
 
+class TestCrossEntropy < CrossEntropyTest
   def test_ce_estimate_ml
     mp = CrossEntropy::MatrixProblem.new
     mp.params = NArray.float(2, 4).fill!(0.5)
@@ -17,10 +10,10 @@ class TestCrossEntropy < MiniTest::Test
     mp.num_elite = 3
 
     # Note that the number of columns in elite can be > num_elite due to ties.
-    elite = NArray[[1,0,0,0],
-                   [0,1,0,0],
-                   [0,0,1,0],
-                   [0,0,0,1]]
+    elite = NArray[[1, 0, 0, 0],
+                   [0, 1, 0, 0],
+                   [0, 0, 1, 0],
+                   [0, 0, 0, 1]]
     pr_est = mp.estimate_ml(elite)
     assert_equal [2, 4], pr_est.shape
     assert_narray_close NArray[[0.75, 0.25],
@@ -29,10 +22,10 @@ class TestCrossEntropy < MiniTest::Test
                                [0.75, 0.25]], pr_est
 
     # All samples the same.
-    elite = NArray[[0,0,0,0],
-                   [1,1,1,1],
-                   [0,0,0,0],
-                   [0,0,0,0]]
+    elite = NArray[[0, 0, 0, 0],
+                   [1, 1, 1, 1],
+                   [0, 0, 0, 0],
+                   [0, 0, 0, 0]]
     pr_est = mp.estimate_ml(elite)
     assert_equal [2, 4], pr_est.shape
     assert_narray_close NArray[[1.0, 0.0],
@@ -48,17 +41,17 @@ class TestCrossEntropy < MiniTest::Test
     mp.num_elite = 3
 
     # When there is a tie, the lowest value is taken.
-    assert_equal NArray[0,0,0], mp.most_likely_solution
+    assert_equal NArray[0, 0, 0], mp.most_likely_solution
 
-    mp.params = NArray[[0.0,0.0,0.0,1.0],
-                       [1.0,0.0,0.0,0.0],
-                       [0.2,0.2,0.2,0.4]]
-    assert_equal NArray[3,0,3], mp.most_likely_solution
+    mp.params = NArray[[0.0, 0.0, 0.0, 1.0],
+                       [1.0, 0.0, 0.0, 0.0],
+                       [0.2, 0.2, 0.2, 0.4]]
+    assert_equal NArray[3, 0, 3], mp.most_likely_solution
 
-    mp.params = NArray[[0.0,0.0,1.0,0.0],
-                       [0.0,1.0,0.0,0.0],
-                       [0.1,0.3,0.4,0.2]]
-    assert_equal NArray[2,1,2], mp.most_likely_solution
+    mp.params = NArray[[0.0, 0.0, 1.0, 0.0],
+                       [0.0, 1.0, 0.0, 0.0],
+                       [0.1, 0.3, 0.4, 0.2]]
+    assert_equal NArray[2, 1, 2], mp.most_likely_solution
   end
 
   #
@@ -66,17 +59,18 @@ class TestCrossEntropy < MiniTest::Test
   # The aim is to search for the given Boolean vector y_true.
   # The MatrixProblem's default estimation rule is equivalent to equation (8).
   #
-  def test_ce_deBoer_1
+  def test_ce_deboer_1
     NArray.srand(567) # must use NArray's generator, not Ruby's
 
     n = 10
-    y_true = NArray[1,1,1,1,1,0,0,0,0,0]
+    y_true = NArray[1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
 
     mp = CrossEntropy::MatrixProblem.new
     mp.params = NArray.float(2, n).fill!(0.5)
     mp.num_samples = 50
     mp.num_elite = 5
     mp.max_iters = 10
+    mp.track_overall_min = true
 
     mp.to_score_sample do |sample|
       y_true.eq(sample).count_false # to be minimized
@@ -87,6 +81,11 @@ class TestCrossEntropy < MiniTest::Test
     if y_true != mp.most_likely_solution
       warn "expected #{y_true}; found #{mp.most_likely_solution}"
     end
+
+    if y_true != mp.overall_min_score_sample
+      warn "expected overall #{y_true}; found #{mp.overall_min_score_sample}"
+    end
+
     assert mp.num_iters <= mp.max_iters
   end
 
@@ -95,20 +94,20 @@ class TestCrossEntropy < MiniTest::Test
   # This is a max-cut problem.
   # We also do some smoothing.
   #
-  def test_ce_deBoer_2
+  def test_ce_deboer_2
     NArray.srand(567) # must use NArray's generator, not Ruby's
 
     # Cost matrix
     n = 5
-    c = NArray[[0,1,3,5,6],
-               [1,0,3,6,5],
-               [3,3,0,2,2],
-               [5,6,2,0,2],
-               [6,5,2,2,0]]
+    c = NArray[[0, 1, 3, 5, 6],
+               [1, 0, 3, 6, 5],
+               [3, 3, 0, 2, 2],
+               [5, 6, 2, 0, 2],
+               [6, 5, 2, 2, 0]]
 
     mp = CrossEntropy::MatrixProblem.new
-    mp.params = NArray.float(2, n).fill!(0.5)
-    mp.params[true,0] = NArray[0.0,1.0] # put vertex 0 in subset 1
+    mp.params = NArray.float(2, n).fill!(0.5)
+    mp.params[true, 0] = NArray[0.0, 1.0] # put vertex 0 in subset 1
     mp.num_samples = 50
     mp.num_elite = 5
     mp.max_iters = 10
@@ -116,25 +115,25 @@ class TestCrossEntropy < MiniTest::Test
 
     max_cut_score = proc do |sample|
       weight = 0
-      for i in 0...n
-        for j in 0...n
-          weight += c[j,i] if sample[i] < sample[j]
+      (0...n).each do |i|
+        (0...n).each do |j|
+          weight += c[j, i] if sample[i] < sample[j]
         end
       end
       -weight # to be minimized
     end
-    best_cut = NArray[1,1,0,0,0]
-    assert_equal(-15, max_cut_score.call(NArray[1,0,0,0,0]))
+    best_cut = NArray[1, 1, 0, 0, 0]
+    assert_equal(-15, max_cut_score.call(NArray[1, 0, 0, 0, 0]))
     assert_equal(-28, max_cut_score.call(best_cut))
 
     mp.to_score_sample(&max_cut_score)
 
     mp.to_update do |pr_iter|
-      smooth*pr_iter + (1 - smooth)*mp.params
+      smooth * pr_iter + (1 - smooth) * mp.params
     end
 
     mp.for_stop_decision do
-      #p mp.params
+      # p mp.params
       mp.num_iters >= mp.max_iters
     end
 
@@ -146,4 +145,3 @@ class TestCrossEntropy < MiniTest::Test
     assert mp.num_iters <= mp.max_iters
   end
 end
-
test/cross_entropy/narray_extensions_test.rb ADDED
@@ -0,0 +1,172 @@
+# frozen_string_literal: true
+
+require_relative 'test_helper'
+
+class TestNArrayExtensions < CrossEntropyTest
+  using CrossEntropy::NArrayExtensions
+
+  def test_array_cumsum
+    assert_equal NArray[], NArray[].cumsum
+    assert_equal NArray[0], NArray[0].cumsum
+    assert_equal NArray[1], NArray[1].cumsum
+    assert_equal NArray[0, 1], NArray[0, 1].cumsum
+    assert_equal NArray[1, 1], NArray[1, 0].cumsum
+    assert_equal NArray[1, 2], NArray[1, 1].cumsum
+    assert_equal NArray[1, 3, 6], NArray[1, 2, 3].cumsum
+  end
+
+  def test_narray_sample_pmf
+    # Sample from vector.
+    v = NArray.float(3).fill!(1)
+    v /= v.sum
+    assert_equal 0, v.sample_pmf_dim(0, NArray[0.0])
+    assert_equal 0, v.sample_pmf_dim(0, NArray[0.333])
+    assert_equal 1, v.sample_pmf_dim(0, NArray[0.334])
+    assert_equal 1, v.sample_pmf_dim(0, NArray[0.666])
+    assert_equal 2, v.sample_pmf_dim(0, NArray[0.667])
+    assert_equal 2, v.sample_pmf_dim(0, NArray[0.999])
+
+    # Sample from vector with sum < 1.
+    v = NArray[0.5, 0.2, 0.2]
+    assert_equal 0, v.sample_pmf_dim(0, NArray[0.0])
+    assert_equal 1, v.sample_pmf_dim(0, NArray[0.5])
+    assert_equal 2, v.sample_pmf_dim(0, NArray[0.89])
+    assert_equal 2, v.sample_pmf_dim(0, NArray[0.91])
+
+    # Zero at start won't be sampled.
+    v = NArray[0.0, 0.5, 0.5]
+    assert_equal 1, v.sample_pmf_dim(0, NArray[0.0])
+    assert_equal 1, v.sample_pmf_dim(0, NArray[0.1])
+    assert_equal 2, v.sample_pmf_dim(0, NArray[0.9])
+
+    # If all entries are zero, we just choose the last one arbitrarily.
+    v = NArray[0.0, 0.0, 0.0]
+    assert_equal 2, v.sample_pmf_dim(0, NArray[0.9])
+
+    # Sample from square matrix.
+    m = NArray.float(3, 3).fill!(1)
+    m /= 3
+    assert_equal \
+      NArray[0, 0, 0], m.sample_pmf_dim(0, NArray[[0.0], [0.0], [0.0]])
+    assert_equal \
+      NArray[1, 0, 0], m.sample_pmf_dim(0, NArray[[0.4], [0.0], [0.0]])
+    assert_equal \
+      NArray[1, 2, 0], m.sample_pmf_dim(0, NArray[[0.4], [0.7], [0.0]])
+    assert_equal NArray[0, 0, 0], m.sample_pmf_dim(1, NArray[[0.0, 0.0, 0.0]])
+    assert_equal NArray[1, 0, 0], m.sample_pmf_dim(1, NArray[[0.4, 0.0, 0.0]])
+    assert_equal NArray[1, 2, 0], m.sample_pmf_dim(1, NArray[[0.4, 0.7, 0.0]])
+
+    # Sample from non-square matrix.
+    m = NArray.float(3, 2).fill!(1)
+    m /= 3
+    assert_equal NArray[0, 0], m.sample_pmf_dim(0, NArray[[0.0], [0.0]])
+    assert_equal NArray[1, 0], m.sample_pmf_dim(0, NArray[[0.4], [0.0]])
+    assert_equal NArray[1, 2], m.sample_pmf_dim(0, NArray[[0.4], [0.7]])
+
+    m = m.transpose(1, 0)
+    assert_equal NArray[0, 0], m.sample_pmf_dim(1, NArray[[0.0, 0.0]])
+    assert_equal NArray[1, 0], m.sample_pmf_dim(1, NArray[[0.4, 0.0]])
+    assert_equal NArray[1, 2], m.sample_pmf_dim(1, NArray[[0.4, 0.7]])
+
+    # Sample from a 3D array.
+    a = NArray.float(4, 3, 2).fill!(1)
+    a /= 2
+    sa = a.sample_pmf_dim(2)
+    assert_equal 2, sa.dim
+    assert_equal [0, 1], sa.to_a.flatten.uniq.sort
+  end
+
+  def test_narray_index_to_subscript
+    assert_raises(IndexError) { NArray[].index_to_subscript(0) }
+
+    assert_equal [0], NArray[0].index_to_subscript(0)
+
+    assert_equal [0], NArray[0, 0].index_to_subscript(0)
+    assert_equal [1], NArray[0, 0].index_to_subscript(1)
+
+    assert_equal [0, 0], NArray[[0, 0]].index_to_subscript(0)
+    assert_equal [1, 0], NArray[[0, 0]].index_to_subscript(1)
+    assert_raises(IndexError) { NArray[[0, 0]].index_to_subscript(2) }
+    assert_raises(IndexError) { NArray[[0, 0]].index_to_subscript(3) }
+    assert_raises(IndexError) { NArray[[0, 0]].index_to_subscript(4) }
+
+    a = NArray.int(2, 2).indgen!
+    assert_equal [0, 0], a.index_to_subscript(0)
+    assert_equal [1, 0], a.index_to_subscript(1)
+    assert_equal [0, 1], a.index_to_subscript(2)
+    assert_equal [1, 1], a.index_to_subscript(3)
+    assert_raises(IndexError) { a.index_to_subscript(4) }
+
+    a = NArray.int(2, 3).indgen!
+    (0...2).each do |j|
+      (0...3).each do |i|
+        assert_equal [j, i], a.index_to_subscript(a[j, i])
+      end
+    end
+
+    a = NArray.int(3, 2).indgen!
+    (0...3).each do |j|
+      (0...2).each do |i|
+        assert_equal [j, i], a.index_to_subscript(a[j, i])
+      end
+    end
+
+    a = NArray.int(3, 2, 4).indgen!
+    (0...3).each do |j|
+      (0...2).each do |i|
+        (0...4).each do |h|
+          assert_equal [j, i, h], a.index_to_subscript(a[j, i, h])
+        end
+      end
+    end
+  end
+
+  def test_narray_sample
+    assert_equal [0], NArray[1.0].sample_pmf
+
+    assert_equal [0], NArray[0.5, 0.5].sample_pmf(NArray[0])
+    assert_equal [0], NArray[0.5, 0.5].sample_pmf(NArray[0.49])
+    assert_equal [1], NArray[0.5, 0.5].sample_pmf(NArray[0.5])
+    assert_equal [1], NArray[0.5, 0.5].sample_pmf(NArray[1.0])
+
+    a = NArray[[0.5, 0.5]]
+    assert_equal [0, 0], a.sample_pmf(NArray[0])
+    assert_equal [0, 0], a.sample_pmf(NArray[0.49])
+    assert_equal [1, 0], a.sample_pmf(NArray[0.5])
+    assert_equal [1, 0], a.sample_pmf(NArray[1.0])
+
+    a = NArray[[0.2, 0], [0.3, 0.2]]
+    assert_equal [0, 0], a.sample_pmf(NArray[0])
+    assert_equal [0, 0], a.sample_pmf(NArray[0.19])
+    assert_equal [0, 1], a.sample_pmf(NArray[0.2]) # note [1,0] has 0 mass
+    assert_equal [1, 1], a.sample_pmf(NArray[0.5])
+    assert_equal [1, 1], a.sample_pmf(NArray[0.51])
+
+    a = NArray[[[0, 0.2], [0.2, 0.2]], [[0.1, 0.1], [0.1, 0.1]]]
+    assert_equal [1, 0, 0], a.sample_pmf(NArray[0]) # note [0,0,0] has 0 mass
+    assert_equal [1, 0, 0], a.sample_pmf(NArray[0.1])
+    assert_equal [0, 1, 0], a.sample_pmf(NArray[0.21])
+    assert_equal [1, 1, 0], a.sample_pmf(NArray[0.41])
+    assert_equal [1, 1, 0], a.sample_pmf(NArray[0.59])
+    assert_equal [0, 0, 1], a.sample_pmf(NArray[0.61])
+    assert_equal [1, 0, 1], a.sample_pmf(NArray[0.71])
+    assert_equal [0, 1, 1], a.sample_pmf(NArray[0.81])
+    assert_equal [1, 1, 1], a.sample_pmf(NArray[0.91])
+    assert_equal [1, 1, 1], a.sample_pmf(NArray[1.0])
+  end
+
+  def test_sample_pmf_examples
+    a = NArray[[0.1, 0.2, 0.7],
+               [0.3, 0.5, 0.2],
+               [0.0, 0.2, 0.8],
+               [0.7, 0.1, 0.2]]
+    assert_equal [4], a.sample_pmf_dim(0).shape
+
+    assert_equal \
+      NArray[2, 1, 2, 0],
+      a.cumsum(0).sample_cdf_dim(0, NArray[[0.5], [0.5], [0.5], [0.5]])
+
+    a = NArray.float(3, 3, 3).fill!(1).div!(3 * 3 * 3)
+    assert_equal 3, a.sample_pmf.size
+  end
+end
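
The new tests above pin down the gem's inverse-CDF sampling convention: walk the cumulative sum of the pmf and return the first index whose cumulative mass strictly exceeds the uniform draw, falling back to the last index when the draw exhausts all the mass (which is how a pmf summing to less than 1, or an all-zero pmf, still yields an answer). A minimal plain-Ruby sketch of that convention — a hypothetical stand-alone helper on plain arrays, not the gem's NArray implementation:

```ruby
# Inverse-CDF sampling as exercised by the tests above: return the first
# index i with r < cumsum(pmf)[i]. If the loop exhausts all the mass
# (pmf sums to less than 1, or is all zeros), fall back to the last index.
# Hypothetical helper for illustration only.
def sample_pmf_sketch(pmf, r)
  cdf = 0.0
  pmf.each_with_index do |p, i|
    cdf += p
    return i if r < cdf
  end
  pmf.size - 1
end

sample_pmf_sketch([0.5, 0.2, 0.2], 0.0)  # => 0
sample_pmf_sketch([0.5, 0.2, 0.2], 0.5)  # => 1  (0.5 is not < 0.5)
sample_pmf_sketch([0.5, 0.2, 0.2], 0.91) # => 2  (mass exhausted: last index)
sample_pmf_sketch([0.0, 0.5, 0.5], 0.0)  # => 1  (zero mass at index 0 is skipped)
```

This matches the vector cases in `test_narray_sample_pmf`; the gem generalises the same walk to a chosen dimension of a multi-dimensional NArray.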
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: cross_entropy
 version: !ruby/object:Gem::Version
-  version: 1.0.0
+  version: 1.1.0
 platform: ruby
 authors:
 - John Lees-Miller
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2016-01-02 00:00:00.000000000 Z
+date: 2017-05-06 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: narray
@@ -30,14 +30,42 @@ dependencies:
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '4.1'
+        version: '5.0'
   type: :development
   prerelease: false
   version_requirements: !ruby/object:Gem::Requirement
     requirements:
     - - "~>"
       - !ruby/object:Gem::Version
-        version: '4.1'
+        version: '5.0'
+- !ruby/object:Gem::Dependency
+  name: rubocop
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: 0.48.0
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: 0.48.0
+- !ruby/object:Gem::Dependency
+  name: simplecov
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: 0.14.1
+  type: :development
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: 0.14.1
 description: Includes solvers for continuous and discrete multivariate optimisation
   problems.
 email:
@@ -58,6 +86,7 @@ files:
 - test/cross_entropy/beta_problem_test.rb
 - test/cross_entropy/continuous_problem_test.rb
 - test/cross_entropy/cross_entropy_test.rb
+- test/cross_entropy/narray_extensions_test.rb
 homepage: https://github.com/jdleesmiller/cross_entropy
 licenses: []
 metadata: {}
@@ -66,7 +95,7 @@ rdoc_options:
 - "--main"
 - README.md
 - "--title"
-- cross_entropy-1.0.0 Documentation
+- cross_entropy-1.1.0 Documentation
 require_paths:
 - lib
 required_ruby_version: !ruby/object:Gem::Requirement
@@ -81,7 +110,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
       version: '0'
 requirements: []
 rubyforge_project:
-rubygems_version: 2.4.6
+rubygems_version: 2.6.12
 signing_key:
 specification_version: 4
 summary: Solve optimisation problems with the Cross Entropy Method.
@@ -89,4 +118,4 @@ test_files:
 - test/cross_entropy/beta_problem_test.rb
 - test/cross_entropy/continuous_problem_test.rb
 - test/cross_entropy/cross_entropy_test.rb
-has_rdoc:
+- test/cross_entropy/narray_extensions_test.rb