svmkit 0.3.2 → 0.3.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 93ce9c2e79ac158b4a3e988afc547b1891419eb6e6b1845156cf98eaa3cdd578
- data.tar.gz: 4e677653deebd035cbdcd5c98529b7f4fee6804075ecab1113dbccc0bf9c65ed
+ metadata.gz: 9c0a64cc46c00a252946033b072b4d9498fb4d5cf7131830333483a336c29315
+ data.tar.gz: 1eb9415f08167772764f1eba4e67f6a3479768db75efcc32a8de85276440d41c
  SHA512:
- metadata.gz: 7518039557e3c991c4a0cc112764198ed6340c8be1fa9c3fb746be21ffbb5518dd35651149cda5aba8ef52a36dfa6b17f47e1335893ae0cd1dfc5776a0e6bf8e
- data.tar.gz: c062d9c2a7c04be82787a4d76a970855c2dc8ce0d4bf5531b6196b96873f4f8e679ee434af9a56c2ebf56ae9a8adb33387925050eb9b8964da613318f2a0e430
+ metadata.gz: 2f994dad593e5b752c2a062507f849483a9e4dbdd90190313b672c2f8cd9c9ed102b2fc088823665812f44e6b549bc67cab7d16fb545031bae0b57e713c3c3c3
+ data.tar.gz: d6da2f56721b8d264898fea922e2bee016987d898b4169f12bc6963044b69a4952f25d1b75380a93ce1fccd9854bb86e085c18fdf9535e273bd2eb1a328d3b98
data/HISTORY.md CHANGED
@@ -1,3 +1,9 @@
+ # 0.3.3
+ - Add class for Ridge regressor.
+ - Add class for Lasso regressor.
+ - Fix a bug in the gradient calculation of FactorizationMachineRegressor.
+ - Fix some documentation.
+
  # 0.3.2
  - Add class for Factorization Machine regressor.
  - Add class for Decision Tree regressor.
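For context, a minimal usage sketch of the two new regressors. This is not part of the package; the synthetic data and variable names are illustrative, and the constructor arguments mirror the @example blocks in the added files shown below:

    require 'numo/narray'
    require 'svmkit/linear_model/ridge'
    require 'svmkit/linear_model/lasso'

    # Synthetic regression data: 100 samples, 4 features, known linear target.
    samples = Numo::DFloat.new(100, 4).rand
    values  = samples.dot(Numo::DFloat[1.0, -2.0, 0.0, 0.5])

    ridge = SVMKit::LinearModel::Ridge.new(reg_param: 0.1, max_iter: 1000, random_seed: 1)
    lasso = SVMKit::LinearModel::Lasso.new(reg_param: 0.1, max_iter: 1000, random_seed: 1)
    predictions = ridge.fit(samples, values).predict(samples)  # fit returns self
    lasso_weights = lasso.fit(samples, values).weight_vec      # L1 drives small weights toward zero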
data/README.md CHANGED
@@ -8,7 +8,7 @@
  SVMKit is a machine learning library in Ruby.
  SVMKit provides machine learning algorithms with interfaces similar to Scikit-Learn in Python.
  SVMKit currently supports Linear / Kernel Support Vector Machine,
- Logistic Regression, Factorization Machine, Naive Bayes, Decision Tree, Random Forest,
+ Logistic Regression, Ridge, Lasso, Factorization Machine, Naive Bayes, Decision Tree, Random Forest,
  K-nearest neighbor classifier, and cross-validation.

  ## Installation
@@ -17,6 +17,8 @@ require 'svmkit/kernel_approximation/rbf'
  require 'svmkit/linear_model/svc'
  require 'svmkit/linear_model/svr'
  require 'svmkit/linear_model/logistic_regression'
+ require 'svmkit/linear_model/ridge'
+ require 'svmkit/linear_model/lasso'
  require 'svmkit/kernel_machine/kernel_svc'
  require 'svmkit/polynomial_model/factorization_machine_classifier'
  require 'svmkit/polynomial_model/factorization_machine_regressor'
data/lib/svmkit/ensemble/random_forest_classifier.rb CHANGED
@@ -155,7 +155,7 @@ module SVMKit
  end

  # Dump marshal data.
- # @return [Hash] The marshal data about RandomForestClassifier
+ # @return [Hash] The marshal data about RandomForestClassifier.
  def marshal_dump
  { params: @params, estimators: @estimators, classes: @classes,
  feature_importances: @feature_importances, rng: @rng }
data/lib/svmkit/ensemble/random_forest_regressor.rb CHANGED
@@ -1,6 +1,5 @@
  # frozen_string_literal: true

- require 'pp'
  require 'svmkit/validation'
  require 'svmkit/base/base_estimator'
  require 'svmkit/base/regressor'
@@ -119,7 +118,7 @@ module SVMKit
  end

  # Dump marshal data.
- # @return [Hash] The marshal data about RandomForestRegressor
+ # @return [Hash] The marshal data about RandomForestRegressor.
  def marshal_dump
  { params: @params,
  estimators: @estimators,
data/lib/svmkit/linear_model/lasso.rb ADDED
@@ -0,0 +1,199 @@
+# frozen_string_literal: true
+
+require 'svmkit/validation'
+require 'svmkit/base/base_estimator'
+require 'svmkit/base/regressor'
+
+module SVMKit
+  module LinearModel
+    # Lasso is a class that implements Lasso Regression
+    # with stochastic gradient descent (SGD) optimization.
+    #
+    # @example
+    #   estimator =
+    #     SVMKit::LinearModel::Lasso.new(reg_param: 0.1, max_iter: 5000, batch_size: 50, random_seed: 1)
+    #   estimator.fit(training_samples, training_values)
+    #   results = estimator.predict(testing_samples)
+    #
+    # *Reference*
+    # - S. Shalev-Shwartz and Y. Singer, "Pegasos: Primal Estimated sub-GrAdient SOlver for SVM," Proc. ICML'07, pp. 807--814, 2007.
+    # - L. Bottou, "Large-Scale Machine Learning with Stochastic Gradient Descent," Proc. COMPSTAT'10, pp. 177--186, 2010.
+    # - I. Sutskever, J. Martens, G. Dahl, and G. Hinton, "On the importance of initialization and momentum in deep learning," Proc. ICML'13, pp. 1139--1147, 2013.
+    # - G. Hinton, N. Srivastava, and K. Swersky, "Lecture 6e rmsprop," Neural Networks for Machine Learning, 2012.
+    class Lasso
+      include Base::BaseEstimator
+      include Base::Regressor
+      include Validation
+
+      # Return the weight vector.
+      # @return [Numo::DFloat] (shape: [n_outputs, n_features])
+      attr_reader :weight_vec
+
+      # Return the bias term (a.k.a. intercept).
+      # @return [Numo::DFloat] (shape: [n_outputs])
+      attr_reader :bias_term
+
+      # Return the random generator for random sampling.
+      # @return [Random]
+      attr_reader :rng
+
+      # Create a new Lasso regressor.
+      #
+      # @param reg_param [Float] The regularization parameter.
+      # @param fit_bias [Boolean] The flag indicating whether to fit the bias term.
+      # @param learning_rate [Float] The learning rate for optimization.
+      # @param decay [Float] The discounting factor for RMS prop optimization.
+      # @param momentum [Float] The momentum for optimization.
+      # @param max_iter [Integer] The maximum number of iterations.
+      # @param batch_size [Integer] The size of the mini batches.
+      # @param random_seed [Integer] The seed value used to initialize the random generator.
+      def initialize(reg_param: 1.0, fit_bias: false, learning_rate: 0.01, decay: 0.9, momentum: 0.9,
+                     max_iter: 1000, batch_size: 10, random_seed: nil)
+        check_params_float(reg_param: reg_param,
+                           learning_rate: learning_rate, decay: decay, momentum: momentum)
+        check_params_integer(max_iter: max_iter, batch_size: batch_size)
+        check_params_boolean(fit_bias: fit_bias)
+        check_params_type_or_nil(Integer, random_seed: random_seed)
+        check_params_positive(reg_param: reg_param,
+                              learning_rate: learning_rate, decay: decay, momentum: momentum,
+                              max_iter: max_iter, batch_size: batch_size)
+        @params = {}
+        @params[:reg_param] = reg_param
+        @params[:fit_bias] = fit_bias
+        @params[:learning_rate] = learning_rate
+        @params[:decay] = decay
+        @params[:momentum] = momentum
+        @params[:max_iter] = max_iter
+        @params[:batch_size] = batch_size
+        @params[:random_seed] = random_seed
+        @params[:random_seed] ||= srand
+        @weight_vec = nil
+        @bias_term = nil
+        @rng = Random.new(@params[:random_seed])
+      end
+
+      # Fit the model with given training data.
+      #
+      # @param x [Numo::DFloat] (shape: [n_samples, n_features]) The training data to be used for fitting the model.
+      # @param y [Numo::Int32] (shape: [n_samples, n_outputs]) The target values to be used for fitting the model.
+      # @return [Lasso] The learned regressor itself.
+      def fit(x, y)
+        check_sample_array(x)
+        check_tvalue_array(y)
+        check_sample_tvalue_size(x, y)
+
+        n_outputs = y.shape[1].nil? ? 1 : y.shape[1]
+        _n_samples, n_features = x.shape
+
+        if n_outputs > 1
+          @weight_vec = Numo::DFloat.zeros(n_outputs, n_features)
+          @bias_term = Numo::DFloat.zeros(n_outputs)
+          n_outputs.times do |n|
+            weight, bias = single_fit(x, y[true, n])
+            @weight_vec[n, true] = weight
+            @bias_term[n] = bias
+          end
+        else
+          @weight_vec, @bias_term = single_fit(x, y)
+        end
+
+        self
+      end
+
+      # Predict values for samples.
+      #
+      # @param x [Numo::DFloat] (shape: [n_samples, n_features]) The samples to predict the values.
+      # @return [Numo::DFloat] (shape: [n_samples, n_outputs]) Predicted values per sample.
+      def predict(x)
+        check_sample_array(x)
+        x.dot(@weight_vec.transpose) + @bias_term
+      end
+
+      # Dump marshal data.
+      # @return [Hash] The marshal data about Lasso.
+      def marshal_dump
+        { params: @params,
+          weight_vec: @weight_vec,
+          bias_term: @bias_term,
+          rng: @rng }
+      end
+
+      # Load marshal data.
+      # @return [nil]
+      def marshal_load(obj)
+        @params = obj[:params]
+        @weight_vec = obj[:weight_vec]
+        @bias_term = obj[:bias_term]
+        @rng = obj[:rng]
+        nil
+      end
+
+      private
+
+      def single_fit(x, y)
+        # Expand feature vectors for bias term.
+        samples = @params[:fit_bias] ? expand_feature(x) : x
+        # Initialize some variables.
+        n_samples, n_features = samples.shape
+        rand_ids = [*0...n_samples].shuffle(random: @rng)
+        weight_vec = Numo::DFloat.zeros(n_features)
+        left_weight_vec = Numo::DFloat.zeros(n_features)
+        left_weight_sqrsum = Numo::DFloat.zeros(n_features)
+        left_weight_update = Numo::DFloat.zeros(n_features)
+        right_weight_vec = Numo::DFloat.zeros(n_features)
+        right_weight_sqrsum = Numo::DFloat.zeros(n_features)
+        right_weight_update = Numo::DFloat.zeros(n_features)
+        # Start optimization.
+        @params[:max_iter].times do |_t|
+          # Random sampling.
+          subset_ids = rand_ids.shift(@params[:batch_size])
+          rand_ids.concat(subset_ids)
+          data = samples[subset_ids, true]
+          values = y[subset_ids]
+          # Calculate gradients for loss function.
+          loss_grad = loss_gradient(data, values, weight_vec)
+          next if loss_grad.ne(0.0).count.zero?
+          # Update weight.
+          left_weight_vec, left_weight_sqrsum, left_weight_update =
+            update_weight(left_weight_vec, left_weight_sqrsum, left_weight_update,
+                          left_weight_gradient(loss_grad, data))
+          right_weight_vec, right_weight_sqrsum, right_weight_update =
+            update_weight(right_weight_vec, right_weight_sqrsum, right_weight_update,
+                          right_weight_gradient(loss_grad, data))
+          weight_vec = left_weight_vec - right_weight_vec
+        end
+        split_weight_vec_bias(weight_vec)
+      end
+
+      def loss_gradient(x, y, weight)
+        2.0 * (x.dot(weight) - y)
+      end
+
+      def left_weight_gradient(loss_grad, data)
+        ((@params[:reg_param] + loss_grad).expand_dims(1) * data).mean(0)
+      end
+
+      def right_weight_gradient(loss_grad, data)
+        ((@params[:reg_param] - loss_grad).expand_dims(1) * data).mean(0)
+      end
+
+      def update_weight(weight, sqrsum, update, gr)
+        new_sqrsum = @params[:decay] * sqrsum + (1.0 - @params[:decay]) * gr**2
+        new_update = (@params[:learning_rate] / ((new_sqrsum + 1.0e-8)**0.5)) * gr
+        new_weight = weight - (new_update + @params[:momentum] * update)
+        new_weight = 0.5 * (new_weight + new_weight.abs)
+        [new_weight, new_sqrsum, new_update]
+      end
+
+      def expand_feature(x)
+        Numo::NArray.hstack([x, Numo::DFloat.ones([x.shape[0], 1])])
+      end
+
+      def split_weight_vec_bias(weight_vec)
+        weights = @params[:fit_bias] ? weight_vec[0...-1] : weight_vec
+        bias = @params[:fit_bias] ? weight_vec[-1] : 0.0
+        [weights, bias]
+      end
+    end
+  end
+end
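A note on the implementation above: rather than handling the non-differentiable L1 penalty directly, the weight vector is split into two nonnegative parts (left_weight_vec and right_weight_vec) with weight_vec = left - right, and the clipping step new_weight = 0.5 * (new_weight + new_weight.abs) keeps each part nonnegative. A minimal plain-Ruby sketch of the identity this relies on (the values here are illustrative, not from the package):

    w_plus  = [0.7, 0.0, 0.2]                      # nonnegative "left" part
    w_minus = [0.0, 0.3, 0.0]                      # nonnegative "right" part
    w = w_plus.zip(w_minus).map { |p, m| p - m }   # => [0.7, -0.3, 0.2]
    l1_split  = w_plus.sum + w_minus.sum           # differentiable surrogate => 1.2
    l1_direct = w.sum { |v| v.abs }                # => 1.2, matches the surrogate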
data/lib/svmkit/linear_model/ridge.rb ADDED
@@ -0,0 +1,185 @@
+# frozen_string_literal: true
+
+require 'svmkit/validation'
+require 'svmkit/base/base_estimator'
+require 'svmkit/base/regressor'
+
+module SVMKit
+  module LinearModel
+    # Ridge is a class that implements Ridge Regression
+    # with stochastic gradient descent (SGD) optimization.
+    #
+    # @example
+    #   estimator =
+    #     SVMKit::LinearModel::Ridge.new(reg_param: 0.1, max_iter: 5000, batch_size: 50, random_seed: 1)
+    #   estimator.fit(training_samples, training_values)
+    #   results = estimator.predict(testing_samples)
+    #
+    # *Reference*
+    # - S. Shalev-Shwartz and Y. Singer, "Pegasos: Primal Estimated sub-GrAdient SOlver for SVM," Proc. ICML'07, pp. 807--814, 2007.
+    # - I. Sutskever, J. Martens, G. Dahl, and G. Hinton, "On the importance of initialization and momentum in deep learning," Proc. ICML'13, pp. 1139--1147, 2013.
+    # - G. Hinton, N. Srivastava, and K. Swersky, "Lecture 6e rmsprop," Neural Networks for Machine Learning, 2012.
+    class Ridge
+      include Base::BaseEstimator
+      include Base::Regressor
+      include Validation
+
+      # Return the weight vector.
+      # @return [Numo::DFloat] (shape: [n_outputs, n_features])
+      attr_reader :weight_vec
+
+      # Return the bias term (a.k.a. intercept).
+      # @return [Numo::DFloat] (shape: [n_outputs])
+      attr_reader :bias_term
+
+      # Return the random generator for random sampling.
+      # @return [Random]
+      attr_reader :rng
+
+      # Create a new Ridge regressor.
+      #
+      # @param reg_param [Float] The regularization parameter.
+      # @param fit_bias [Boolean] The flag indicating whether to fit the bias term.
+      # @param learning_rate [Float] The learning rate for optimization.
+      # @param decay [Float] The discounting factor for RMS prop optimization.
+      # @param momentum [Float] The Nesterov momentum for optimization.
+      # @param max_iter [Integer] The maximum number of iterations.
+      # @param batch_size [Integer] The size of the mini batches.
+      # @param random_seed [Integer] The seed value used to initialize the random generator.
+      def initialize(reg_param: 1.0, fit_bias: false, learning_rate: 0.01, decay: 0.9, momentum: 0.9,
+                     max_iter: 1000, batch_size: 10, random_seed: nil)
+        check_params_float(reg_param: reg_param,
+                           learning_rate: learning_rate, decay: decay, momentum: momentum)
+        check_params_integer(max_iter: max_iter, batch_size: batch_size)
+        check_params_boolean(fit_bias: fit_bias)
+        check_params_type_or_nil(Integer, random_seed: random_seed)
+        check_params_positive(reg_param: reg_param,
+                              learning_rate: learning_rate, decay: decay, momentum: momentum,
+                              max_iter: max_iter, batch_size: batch_size)
+        @params = {}
+        @params[:reg_param] = reg_param
+        @params[:fit_bias] = fit_bias
+        @params[:learning_rate] = learning_rate
+        @params[:decay] = decay
+        @params[:momentum] = momentum
+        @params[:max_iter] = max_iter
+        @params[:batch_size] = batch_size
+        @params[:random_seed] = random_seed
+        @params[:random_seed] ||= srand
+        @weight_vec = nil
+        @bias_term = nil
+        @rng = Random.new(@params[:random_seed])
+      end
+
+      # Fit the model with given training data.
+      #
+      # @param x [Numo::DFloat] (shape: [n_samples, n_features]) The training data to be used for fitting the model.
+      # @param y [Numo::Int32] (shape: [n_samples, n_outputs]) The target values to be used for fitting the model.
+      # @return [Ridge] The learned regressor itself.
+      def fit(x, y)
+        check_sample_array(x)
+        check_tvalue_array(y)
+        check_sample_tvalue_size(x, y)
+
+        n_outputs = y.shape[1].nil? ? 1 : y.shape[1]
+        _n_samples, n_features = x.shape
+
+        if n_outputs > 1
+          @weight_vec = Numo::DFloat.zeros(n_outputs, n_features)
+          @bias_term = Numo::DFloat.zeros(n_outputs)
+          n_outputs.times do |n|
+            weight, bias = single_fit(x, y[true, n])
+            @weight_vec[n, true] = weight
+            @bias_term[n] = bias
+          end
+        else
+          @weight_vec, @bias_term = single_fit(x, y)
+        end
+
+        self
+      end
+
+      # Predict values for samples.
+      #
+      # @param x [Numo::DFloat] (shape: [n_samples, n_features]) The samples to predict the values.
+      # @return [Numo::DFloat] (shape: [n_samples, n_outputs]) Predicted values per sample.
+      def predict(x)
+        check_sample_array(x)
+        x.dot(@weight_vec.transpose) + @bias_term
+      end
+
+      # Dump marshal data.
+      # @return [Hash] The marshal data about Ridge.
+      def marshal_dump
+        { params: @params,
+          weight_vec: @weight_vec,
+          bias_term: @bias_term,
+          rng: @rng }
+      end
+
+      # Load marshal data.
+      # @return [nil]
+      def marshal_load(obj)
+        @params = obj[:params]
+        @weight_vec = obj[:weight_vec]
+        @bias_term = obj[:bias_term]
+        @rng = obj[:rng]
+        nil
+      end
+
+      private
+
+      def single_fit(x, y)
+        # Expand feature vectors for bias term.
+        samples = @params[:fit_bias] ? expand_feature(x) : x
+        # Initialize some variables.
+        n_samples, n_features = samples.shape
+        rand_ids = [*0...n_samples].shuffle(random: @rng)
+        weight_vec = Numo::DFloat.zeros(n_features)
+        weight_sqrsum = Numo::DFloat.zeros(n_features)
+        weight_update = Numo::DFloat.zeros(n_features)
+        # Start optimization.
+        @params[:max_iter].times do |_t|
+          # Random sampling.
+          subset_ids = rand_ids.shift(@params[:batch_size])
+          rand_ids.concat(subset_ids)
+          data = samples[subset_ids, true]
+          values = y[subset_ids]
+          # Calculate gradients for loss function.
+          loss_grad = loss_gradient(data, values, weight_vec - @params[:momentum] * weight_update)
+          next if loss_grad.ne(0.0).count.zero?
+          # Update weight.
+          weight_vec, weight_sqrsum, weight_update =
+            update_weight(weight_vec, weight_sqrsum, weight_update,
+                          weight_gradient(loss_grad, data, weight_vec - @params[:momentum] * weight_update))
+        end
+        split_weight_vec_bias(weight_vec)
+      end
+
+      def loss_gradient(x, y, weight)
+        2.0 * (x.dot(weight) - y)
+      end
+
+      def weight_gradient(loss_grad, data, weight)
+        (loss_grad.expand_dims(1) * data).mean(0) + @params[:reg_param] * weight
+      end
+
+      def update_weight(weight, sqrsum, update, gr)
+        new_sqrsum = @params[:decay] * sqrsum + (1.0 - @params[:decay]) * gr**2
+        new_update = (@params[:learning_rate] / ((new_sqrsum + 1.0e-8)**0.5)) * gr
+        new_weight = weight - (new_update + @params[:momentum] * update)
+        [new_weight, new_sqrsum, new_update]
+      end
+
+      def expand_feature(x)
+        Numo::NArray.hstack([x, Numo::DFloat.ones([x.shape[0], 1])])
+      end
+
+      def split_weight_vec_bias(weight_vec)
+        weights = @params[:fit_bias] ? weight_vec[0...-1] : weight_vec
+        bias = @params[:fit_bias] ? weight_vec[-1] : 0.0
+        [weights, bias]
+      end
+    end
+  end
+end
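The update_weight step above combines an RMSProp-style scaled learning rate with momentum, and single_fit evaluates the loss gradient at the lookahead point (weight_vec - momentum * weight_update), i.e. Nesterov-style acceleration. A scalar plain-Ruby sketch of the same update rule under assumed toy values (target 0.5, starting weight 1.0):

    decay, lr, momentum = 0.9, 0.01, 0.9
    w, sqrsum, update = 1.0, 0.0, 0.0
    10.times do
      grad = 2.0 * ((w - momentum * update) - 0.5)        # gradient at the lookahead point
      sqrsum = decay * sqrsum + (1.0 - decay) * grad**2   # running average of squared gradients
      new_update = (lr / (sqrsum + 1.0e-8)**0.5) * grad   # RMSProp-scaled step
      w -= new_update + momentum * update                 # step plus momentum carry-over
      update = new_update
    end
    puts w  # drifts from 1.0 toward 0.5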
data/lib/svmkit/linear_model/svr.rb CHANGED
@@ -21,11 +21,11 @@ module SVMKit
  include Base::BaseEstimator
  include Base::Regressor

- # Return the weight vector for SVC.
+ # Return the weight vector for SVR.
  # @return [Numo::DFloat] (shape: [n_outputs, n_features])
  attr_reader :weight_vec

- # Return the bias term (a.k.a. intercept) for SVC.
+ # Return the bias term (a.k.a. intercept) for SVR.
  # @return [Numo::DFloat] (shape: [n_outputs])
  attr_reader :bias_term

@@ -104,7 +104,7 @@ module SVMKit
  end

  # Dump marshal data.
- # @return [Hash] The marshal data about SVC.
+ # @return [Hash] The marshal data about SVR.
  def marshal_dump
  { params: @params,
  weight_vec: @weight_vec,
data/lib/svmkit/polynomial_model/factorization_machine_classifier.rb CHANGED
@@ -164,7 +164,7 @@ module SVMKit
  end

  # Dump marshal data.
- # @return [Hash] The marshal data about FactorizationMachineClassifier
+ # @return [Hash] The marshal data about FactorizationMachineClassifier.
  def marshal_dump
  { params: @params,
  factor_mat: @factor_mat,
data/lib/svmkit/polynomial_model/factorization_machine_regressor.rb CHANGED
@@ -134,7 +134,7 @@ module SVMKit
  end

  # Dump marshal data.
- # @return [Hash] The marshal data about FactorizationMachineRegressor
+ # @return [Hash] The marshal data about FactorizationMachineRegressor.
  def marshal_dump
  { params: @params,
  factor_mat: @factor_mat,
@@ -177,7 +177,10 @@ module SVMKit
  data = x[subset_ids, true]
  values = y[subset_ids]
  # Calculate gradients for loss function.
- loss_grad = loss_gradient(data, values, factor_mat, weight_vec, bias_term)
+ loss_grad = loss_gradient(data, values,
+                           factor_mat - @params[:momentum] * factor_update,
+                           weight_vec - @params[:momentum] * weight_update,
+                           bias_term - @params[:momentum] * bias_update)
  next if loss_grad.ne(0.0).count.zero?
  # Update each parameter.
  bias_term, bias_sqrsum, bias_update =
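This hunk is the "Fix a bug in the gradient calculation of FactorizationMachineRegressor" entry from HISTORY.md: with momentum updates of this form, the loss gradient should be evaluated at the lookahead parameters (current minus momentum times last update), as the new Ridge class does, rather than at the current parameters. A scalar sketch of the difference, with toy values and a hypothetical squared-loss gradient:

    momentum, weight, update = 0.9, 1.0, 0.2
    grad = ->(w) { 2.0 * (w - 0.5) }                   # toy loss gradient, target 0.5
    before_fix = grad.call(weight)                     # evaluated at current point   => 1.0
    after_fix  = grad.call(weight - momentum * update) # evaluated at lookahead point => 0.64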
data/lib/svmkit/version.rb CHANGED
@@ -3,5 +3,5 @@
  # SVMKit is a machine learning library in Ruby.
  module SVMKit
  # @!visibility private
- VERSION = '0.3.2'.freeze
+ VERSION = '0.3.3'.freeze
  end
data/svmkit.gemspec CHANGED
@@ -17,7 +17,7 @@ MSG
  SVMKit is a machine learning library in Ruby.
  SVMKit provides machine learning algorithms with interfaces similar to Scikit-Learn in Python.
  SVMKit currently supports Linear / Kernel Support Vector Machine,
- Logistic Regression, Factorization Machine, Naive Bayes, Decision Tree, Random Forest,
+ Logistic Regression, Ridge, Lasso, Factorization Machine, Naive Bayes, Decision Tree, Random Forest,
  K-nearest neighbor algorithm, and cross-validation.
  MSG
  spec.homepage = 'https://github.com/yoshoku/svmkit'
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: svmkit
  version: !ruby/object:Gem::Version
- version: 0.3.2
+ version: 0.3.3
  platform: ruby
  authors:
  - yoshoku
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2018-05-23 00:00:00.000000000 Z
+ date: 2018-05-25 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: numo-narray
@@ -84,7 +84,7 @@ description: |
  SVMKit is a machine learning library in Ruby.
  SVMKit provides machine learning algorithms with interfaces similar to Scikit-Learn in Python.
  SVMKit currently supports Linear / Kernel Support Vector Machine,
- Logistic Regression, Factorization Machine, Naive Bayes, Decision Tree, Random Forest,
+ Logistic Regression, Ridge, Lasso, Factorization Machine, Naive Bayes, Decision Tree, Random Forest,
  K-nearest neighbor algorithm, and cross-validation.
  email:
  - yoshoku@outlook.com
@@ -127,7 +127,9 @@ files:
  - lib/svmkit/evaluation_measure/recall.rb
  - lib/svmkit/kernel_approximation/rbf.rb
  - lib/svmkit/kernel_machine/kernel_svc.rb
+ - lib/svmkit/linear_model/lasso.rb
  - lib/svmkit/linear_model/logistic_regression.rb
+ - lib/svmkit/linear_model/ridge.rb
  - lib/svmkit/linear_model/svc.rb
  - lib/svmkit/linear_model/svr.rb
  - lib/svmkit/model_selection/cross_validation.rb