svmkit 0.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/.gitignore +12 -0
- data/.rspec +2 -0
- data/.rubocop.yml +17 -0
- data/.travis.yml +5 -0
- data/CODE_OF_CONDUCT.md +74 -0
- data/Gemfile +6 -0
- data/HISTORY.md +8 -0
- data/LICENSE.txt +23 -0
- data/README.md +84 -0
- data/Rakefile +6 -0
- data/bin/console +14 -0
- data/bin/setup +8 -0
- data/lib/svmkit.rb +16 -0
- data/lib/svmkit/base/base_estimator.rb +11 -0
- data/lib/svmkit/base/classifier.rb +22 -0
- data/lib/svmkit/base/transformer.rb +17 -0
- data/lib/svmkit/kernel_approximation/rbf.rb +133 -0
- data/lib/svmkit/linear_model/pegasos_svc.rb +148 -0
- data/lib/svmkit/multiclass/one_vs_rest_classifier.rb +127 -0
- data/lib/svmkit/preprocessing/l2_normalizer.rb +57 -0
- data/lib/svmkit/preprocessing/min_max_scaler.rb +99 -0
- data/lib/svmkit/preprocessing/standard_scaler.rb +87 -0
- data/lib/svmkit/utils.rb +33 -0
- data/lib/svmkit/version.rb +3 -0
- data/svmkit.gemspec +37 -0
- metadata +128 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
+---
+SHA1:
+  metadata.gz: 62fdc5c03a044a7625bf2374159cf84ef32a6869
+  data.tar.gz: 4cd1c86a344cd1410a3a5c0f4bdebd04d80e7e7b
+SHA512:
+  metadata.gz: 1b704e536e183f881e6f16895ccdc1620dc8b694db7b44772db669e579ed07652df16c8de88794c65b5eeca0eeb805c415f1e44c36446cad3bdd230f3354b320
+  data.tar.gz: e25ca447621cef29ea1807168cbe6e7210308549a298db7c8797d54e127bfb1e7fe7de3c3e9a9d719cee6de1100705e352a3b76a5127282441027fd1b389e2e1
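These are SHA1 and SHA512 digests of the two archives packed inside the published `.gem` (`metadata.gz` and `data.tar.gz`). If you want to re-verify them against a downloaded copy, Ruby's `digest` stdlib is enough; the path below is a placeholder for a file extracted from the gem, not part of the package itself:

```ruby
require 'digest'

# Compute the same digests checksums.yaml records for a packaged file.
# The path is a placeholder; substitute a file unpacked from the .gem
# archive to compare against the published values above.
def gem_digests(path)
  data = File.binread(path)
  { sha1: Digest::SHA1.hexdigest(data), sha512: Digest::SHA512.hexdigest(data) }
end
```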
data/.gitignore
ADDED
data/.rspec
ADDED
data/.rubocop.yml
ADDED
data/.travis.yml
ADDED
data/CODE_OF_CONDUCT.md
ADDED
@@ -0,0 +1,74 @@
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+In the interest of fostering an open and welcoming environment, we as
+contributors and maintainers pledge to making participation in our project and
+our community a harassment-free experience for everyone, regardless of age, body
+size, disability, ethnicity, gender identity and expression, level of experience,
+nationality, personal appearance, race, religion, or sexual identity and
+orientation.
+
+## Our Standards
+
+Examples of behavior that contributes to creating a positive environment
+include:
+
+* Using welcoming and inclusive language
+* Being respectful of differing viewpoints and experiences
+* Gracefully accepting constructive criticism
+* Focusing on what is best for the community
+* Showing empathy towards other community members
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or
+  advances
+* Trolling, insulting/derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or electronic
+  address, without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a
+  professional setting
+
+## Our Responsibilities
+
+Project maintainers are responsible for clarifying the standards of acceptable
+behavior and are expected to take appropriate and fair corrective action in
+response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or
+reject comments, commits, code, wiki edits, issues, and other contributions
+that are not aligned to this Code of Conduct, or to ban temporarily or
+permanently any contributor for other behaviors that they deem inappropriate,
+threatening, offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community. Examples of
+representing a project or community include using an official project e-mail
+address, posting via an official social media account, or acting as an appointed
+representative at an online or offline event. Representation of a project may be
+further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting the project team at yoshoku@outlook.com. All
+complaints will be reviewed and investigated and will result in a response that
+is deemed necessary and appropriate to the circumstances. The project team is
+obligated to maintain confidentiality with regard to the reporter of an incident.
+Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good
+faith may face temporary or permanent repercussions as determined by other
+members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
+available at [http://contributor-covenant.org/version/1/4][version]
+
+[homepage]: http://contributor-covenant.org
+[version]: http://contributor-covenant.org/version/1/4/
data/Gemfile
ADDED
data/HISTORY.md
ADDED
@@ -0,0 +1,8 @@
+# 0.1.0
+- Added basic classes.
+- Added a utility module.
+- Added a class for RBF kernel approximation.
+- Added a class for Support Vector Machine with the Pegasos algorithm.
+- Added a class that performs multiclass classification with the one-vs.-rest strategy.
+- Added classes for preprocessing such as min-max scaling, standardization, and L2 normalization.
+
data/LICENSE.txt
ADDED
@@ -0,0 +1,23 @@
+Copyright (c) 2017 yoshoku
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice, this
+  list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright notice,
+  this list of conditions and the following disclaimer in the documentation
+  and/or other materials provided with the distribution.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
data/README.md
ADDED
@@ -0,0 +1,84 @@
+# SVMKit
+
+SVMKit is a library for machine learning in Ruby.
+SVMKit implements machine learning algorithms with an interface similar to Scikit-Learn in Python.
+However, since SVMKit is an experimental library, only a few machine learning algorithms are implemented so far.
+
+## Installation
+
+Add this line to your application's Gemfile:
+
+```ruby
+gem 'svmkit'
+```
+
+And then execute:
+
+    $ bundle
+
+Or install it yourself as:
+
+    $ gem install svmkit
+
+## Usage
+
+Training phase:
+```ruby
+require 'svmkit'
+require 'libsvmloader'
+
+samples, labels = LibSVMLoader.load_libsvm_file('pendigits', stype: :dense)
+
+normalizer = SVMKit::Preprocessing::MinMaxScaler.new
+normalized = normalizer.fit_transform(samples)
+
+transformer = SVMKit::KernelApproximation::RBF.new(gamma: 2.0, n_components: 1024, random_seed: 1)
+transformed = transformer.fit_transform(normalized)
+
+base_classifier =
+  SVMKit::LinearModel::PegasosSVC.new(reg_param: 1.0, max_iter: 50, batch_size: 20, random_seed: 1)
+classifier = SVMKit::Multiclass::OneVsRestClassifier.new(estimator: base_classifier)
+classifier.fit(transformed, labels)
+
+File.open('trained_normalizer.dat', 'wb') { |f| f.write(Marshal.dump(normalizer)) }
+File.open('trained_transformer.dat', 'wb') { |f| f.write(Marshal.dump(transformer)) }
+File.open('trained_classifier.dat', 'wb') { |f| f.write(Marshal.dump(classifier)) }
+```
+
+Testing phase:
+```ruby
+require 'svmkit'
+require 'libsvmloader'
+
+samples, labels = LibSVMLoader.load_libsvm_file('pendigits.t', stype: :dense)
+
+normalizer = Marshal.load(File.binread('trained_normalizer.dat'))
+transformer = Marshal.load(File.binread('trained_transformer.dat'))
+classifier = Marshal.load(File.binread('trained_classifier.dat'))
+
+normalized = normalizer.transform(samples)
+transformed = transformer.transform(normalized)
+
+puts(sprintf("Accuracy: %.1f%%", 100.0 * classifier.score(transformed, labels)))
+```
+
+## Development
+
+After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
+
+To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to [rubygems.org](https://rubygems.org).
+
+## Contributing
+
+Bug reports and pull requests are welcome on GitHub at https://github.com/yoshoku/svmkit.
+This project is intended to be a safe, welcoming space for collaboration,
+and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
+
+## License
+
+The gem is available as open source under the terms of the [BSD 2-clause License](https://opensource.org/licenses/BSD-2-Clause).
+
+## Code of Conduct
+
+Everyone interacting in the SVMKit project's codebases, issue trackers,
+chat rooms and mailing lists is expected to follow the [code of conduct](https://github.com/yoshoku/svmkit/blob/master/CODE_OF_CONDUCT.md).
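The README's save/load steps rely only on Ruby's built-in `Marshal`; each SVMKit estimator defines `marshal_dump`/`marshal_load` hooks so trained models serialize cleanly. The same round trip with a stand-in hash (not a real fitted estimator; the filename is illustrative):

```ruby
# Round-trip any Marshal-friendly object the way the README persists
# trained models. The hash stands in for a fitted estimator.
model = { weights: [0.5, -1.2], bias: 0.1 }
File.open('model.dat', 'wb') { |f| f.write(Marshal.dump(model)) }
restored = Marshal.load(File.binread('model.dat'))
```

Because `Marshal` serializes the full object graph, `restored` is a deep copy, independent of the original.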
data/Rakefile
ADDED
data/bin/console
ADDED
@@ -0,0 +1,14 @@
+#!/usr/bin/env ruby
+
+require "bundler/setup"
+require "svmkit"
+
+# You can add fixtures and/or initialization code here to make experimenting
+# with your gem easier. You can also use a different console, if you like.
+
+# (If you use this, don't forget to add pry to your Gemfile!)
+# require "pry"
+# Pry.start
+
+require "irb"
+IRB.start(__FILE__)
data/bin/setup
ADDED
data/lib/svmkit.rb
ADDED
@@ -0,0 +1,16 @@
+begin
+  require 'nmatrix/nmatrix'
+rescue LoadError
+end
+
+require 'svmkit/version'
+require 'svmkit/utils'
+require 'svmkit/base/base_estimator'
+require 'svmkit/base/classifier'
+require 'svmkit/base/transformer'
+require 'svmkit/kernel_approximation/rbf'
+require 'svmkit/linear_model/pegasos_svc'
+require 'svmkit/multiclass/one_vs_rest_classifier'
+require 'svmkit/preprocessing/l2_normalizer'
+require 'svmkit/preprocessing/min_max_scaler'
+require 'svmkit/preprocessing/standard_scaler'
data/lib/svmkit/base/classifier.rb
ADDED
@@ -0,0 +1,22 @@
+
+module SVMKit
+  module Base
+    # Module for all classifiers in SVMKit.
+    module Classifier
+      # An abstract method for fitting a model.
+      def fit
+        raise NotImplementedError, "#{__method__} has to be implemented in #{self.class}."
+      end
+
+      # An abstract method for predicting labels.
+      def predict
+        raise NotImplementedError, "#{__method__} has to be implemented in #{self.class}."
+      end
+
+      # An abstract method for calculating classification accuracy.
+      def score
+        raise NotImplementedError, "#{__method__} has to be implemented in #{self.class}."
+      end
+    end
+  end
+end
data/lib/svmkit/base/transformer.rb
ADDED
@@ -0,0 +1,17 @@
+
+module SVMKit
+  module Base
+    # Module for all transformers in SVMKit.
+    module Transformer
+      # An abstract method for fitting a model.
+      def fit
+        raise NotImplementedError, "#{__method__} has to be implemented in #{self.class}."
+      end
+
+      # An abstract method for fitting a model and transforming given data.
+      def fit_transform
+        raise NotImplementedError, "#{__method__} has to be implemented in #{self.class}."
+      end
+    end
+  end
+end
data/lib/svmkit/kernel_approximation/rbf.rb
ADDED
@@ -0,0 +1,133 @@
+require 'svmkit/base/base_estimator'
+require 'svmkit/base/transformer'
+
+module SVMKit
+  # Module for kernel approximation algorithms.
+  module KernelApproximation
+    # Class for RBF kernel feature mapping.
+    #
+    #   transformer = SVMKit::KernelApproximation::RBF.new(gamma: 1.0, n_components: 128, random_seed: 1)
+    #   new_training_samples = transformer.fit_transform(training_samples)
+    #   new_testing_samples = transformer.transform(testing_samples)
+    #
+    # * *Reference*:
+    #   - A. Rahimi and B. Recht, "Random Features for Large-Scale Kernel Machines," Proc. NIPS'07, pp. 1177--1184, 2007.
+    class RBF
+      include Base::BaseEstimator
+      include Base::Transformer
+
+      DEFAULT_PARAMS = { # :nodoc:
+        gamma: 1.0,
+        n_components: 128,
+        random_seed: nil
+      }.freeze
+
+      # The random matrix for transformation.
+      attr_reader :random_mat # :nodoc:
+
+      # The random vector for transformation.
+      attr_reader :random_vec # :nodoc:
+
+      # The random generator for transformation.
+      attr_reader :rng # :nodoc:
+
+      # Creates a new transformer for mapping to RBF kernel feature space.
+      #
+      # call-seq:
+      #   new(gamma: 1.0, n_components: 128, random_seed: 1) -> RBF
+      #
+      # * *Arguments* :
+      #   - +:gamma+ (Float) (defaults to: 1.0) -- The parameter of the RBF kernel: exp(-gamma * x^2).
+      #   - +:n_components+ (Integer) (defaults to: 128) -- The number of dimensions of the RBF kernel feature space.
+      #   - +:random_seed+ (Integer) (defaults to: nil) -- The seed value used to initialize the random generator.
+      def initialize(params = {})
+        self.params = DEFAULT_PARAMS.merge(Hash[params.map { |k, v| [k.to_sym, v] }])
+        self.params[:random_seed] ||= srand
+        @rng = Random.new(self.params[:random_seed])
+        @random_mat = nil
+        @random_vec = nil
+      end
+
+      # Fit the model with given training data.
+      #
+      # call-seq:
+      #   fit(x) -> RBF
+      #
+      # * *Arguments* :
+      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The training data to be used for fitting the model. This method uses only the number of features of the data.
+      # * *Returns* :
+      #   - The learned transformer itself.
+      def fit(x, _y = nil)
+        n_features = x.shape[1]
+        params[:n_components] = 2 * n_features if params[:n_components] <= 0
+        @random_mat = rand_normal([n_features, params[:n_components]]) * (2.0 * params[:gamma])**0.5
+        n_half_components = params[:n_components] / 2
+        @random_vec = NMatrix.zeros([1, params[:n_components] - n_half_components]).hconcat(
+          NMatrix.ones([1, n_half_components]) * (0.5 * Math::PI)
+        )
+        # @random_vec = rand_uniform([1, self.params[:n_components]]) * (2.0 * Math::PI)
+        self
+      end
+
+      # Fit the model with training data, and then transform them with the learned model.
+      #
+      # call-seq:
+      #   fit_transform(x) -> NMatrix
+      #
+      # * *Arguments* :
+      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The training data to be used for fitting the model.
+      # * *Returns* :
+      #   - The transformed data (NMatrix, shape: [n_samples, n_components]).
+      def fit_transform(x, _y = nil)
+        fit(x).transform(x)
+      end
+
+      # Transform the given data with the learned model.
+      #
+      # call-seq:
+      #   transform(x) -> NMatrix
+      #
+      # * *Arguments* :
+      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The data to be transformed with the learned model.
+      # * *Returns* :
+      #   - The transformed data (NMatrix, shape: [n_samples, n_components]).
+      def transform(x)
+        n_samples, = x.shape
+        projection = x.dot(@random_mat) + @random_vec.repeat(n_samples, 0)
+        projection.sin * ((2.0 / params[:n_components])**0.5)
+      end
+
+      # Serializes object through Marshal#dump.
+      def marshal_dump # :nodoc:
+        { params: params,
+          random_mat: Utils.dump_nmatrix(@random_mat),
+          random_vec: Utils.dump_nmatrix(@random_vec),
+          rng: @rng }
+      end
+
+      # Deserialize object through Marshal#load.
+      def marshal_load(obj) # :nodoc:
+        self.params = obj[:params]
+        @random_mat = Utils.restore_nmatrix(obj[:random_mat])
+        @random_vec = Utils.restore_nmatrix(obj[:random_vec])
+        @rng = obj[:rng]
+        nil
+      end
+
+      protected
+
+      # Generate a uniform random matrix with the given shape.
+      def rand_uniform(shape) # :nodoc:
+        rnd_vals = Array.new(NMatrix.size(shape)) { @rng.rand }
+        NMatrix.new(shape, rnd_vals, dtype: :float64, stype: :dense)
+      end
+
+      # Generate a normal random matrix with the given shape, mean, and standard deviation.
+      def rand_normal(shape, mu = 0.0, sigma = 1.0) # :nodoc:
+        a = rand_uniform(shape)
+        b = rand_uniform(shape)
+        ((a.log * -2.0).sqrt * (b * 2.0 * Math::PI).sin) * sigma + mu
+      end
+    end
+  end
+end
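The `RBF` transformer above is the random Fourier feature construction from the cited Rahimi and Recht paper: project samples onto random Gaussian directions scaled by `sqrt(2 * gamma)`, add a random phase, and take a sinusoid, so that inner products of mapped samples approximate `exp(-gamma * ||x - y||^2)`. A plain-array sketch of the same idea, using the uniform-phase variant that appears commented out in `fit`; no NMatrix required, and `gamma`, the component count, and the seed are illustrative:

```ruby
# Map a sample x into the random Fourier feature space defined by
# weights w (n_features x n_components) and phases b (n_components).
def rff_map(x, w, b, n_components)
  scale = Math.sqrt(2.0 / n_components)
  (0...n_components).map do |j|
    proj = x.each_with_index.sum { |xi, i| xi * w[i][j] } + b[j]
    scale * Math.sin(proj)
  end
end

# Draw the random weights (Box-Muller normals scaled by sqrt(2 * gamma),
# as in RBF#fit) and uniform phases in [0, 2*pi).
def build_rff(n_features, n_components, gamma, rng)
  sd = Math.sqrt(2.0 * gamma)
  w = Array.new(n_features) do
    Array.new(n_components) do
      u1 = 1.0 - rng.rand # in (0, 1], avoids log(0)
      u2 = rng.rand
      Math.sqrt(-2.0 * Math.log(u1)) * Math.sin(2.0 * Math::PI * u2) * sd
    end
  end
  b = Array.new(n_components) { rng.rand * 2.0 * Math::PI }
  [w, b]
end
```

With a few thousand components, the dot product of two mapped samples closely tracks the exact RBF kernel value.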
data/lib/svmkit/linear_model/pegasos_svc.rb
ADDED
@@ -0,0 +1,148 @@
+require 'svmkit/base/base_estimator'
+require 'svmkit/base/classifier'
+
+module SVMKit
+  # This module consists of the classes that implement generalized linear models.
+  module LinearModel
+    # PegasosSVC is a class that implements a Support Vector Classifier with the Pegasos algorithm.
+    #
+    #   estimator =
+    #     SVMKit::LinearModel::PegasosSVC.new(reg_param: 1.0, max_iter: 100, batch_size: 20, random_seed: 1)
+    #   estimator.fit(training_samples, training_labels)
+    #   results = estimator.predict(testing_samples)
+    #
+    # * *Reference*:
+    #   - S. Shalev-Shwartz and Y. Singer, "Pegasos: Primal Estimated sub-GrAdient SOlver for SVM," Proc. ICML'07, pp. 807--814, 2007.
+    #
+    class PegasosSVC
+      include Base::BaseEstimator
+      include Base::Classifier
+
+      DEFAULT_PARAMS = { # :nodoc:
+        reg_param: 1.0,
+        max_iter: 100,
+        batch_size: 50,
+        random_seed: nil
+      }.freeze
+
+      # The weight vector for SVC.
+      attr_reader :weight_vec
+
+      # The random generator for performing random sampling in the Pegasos algorithm.
+      attr_reader :rng
+
+      # Create a new classifier with Support Vector Machine via the Pegasos algorithm.
+      #
+      # :call-seq:
+      #   new(reg_param: 1.0, max_iter: 100, batch_size: 50, random_seed: 1) -> PegasosSVC
+      #
+      # * *Arguments* :
+      #   - +:reg_param+ (Float) (defaults to: 1.0) -- The regularization parameter.
+      #   - +:max_iter+ (Integer) (defaults to: 100) -- The maximum number of iterations.
+      #   - +:batch_size+ (Integer) (defaults to: 50) -- The size of the mini batches.
+      #   - +:random_seed+ (Integer) (defaults to: nil) -- The seed value used to initialize the random generator.
+      def initialize(params = {})
+        self.params = DEFAULT_PARAMS.merge(Hash[params.map { |k, v| [k.to_sym, v] }])
+        self.params[:random_seed] ||= srand
+        @weight_vec = nil
+        @rng = Random.new(self.params[:random_seed])
+      end
+
+      # Fit the model with given training data.
+      #
+      # :call-seq:
+      #   fit(x, y) -> PegasosSVC
+      #
+      # * *Arguments* :
+      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The training data to be used for fitting the model.
+      #   - +y+ (NMatrix, shape: [1, n_samples]) -- The labels to be used for fitting the model.
+      # * *Returns* :
+      #   - The learned classifier itself.
+      def fit(x, y)
+        # Generate binary labels.
+        negative_label = y.uniq.sort.shift
+        bin_y = y.to_flat_a.map { |l| l != negative_label ? 1 : -1 }
+        # Initialize some variables.
+        n_samples, n_features = x.shape
+        rand_ids = [*0..n_samples - 1].shuffle(random: @rng)
+        @weight_vec = NMatrix.zeros([1, n_features])
+        # Start optimization.
+        params[:max_iter].times do |t|
+          # Random sampling.
+          subset_ids = rand_ids.shift(params[:batch_size])
+          rand_ids.concat(subset_ids)
+          target_ids = subset_ids.map do |n|
+            n if @weight_vec.dot(x.row(n).transpose) * bin_y[n] < 1
+          end.compact
+          n_subsamples = target_ids.size
+          next if n_subsamples.zero?
+          # Update the weight vector.
+          eta = 1.0 / (params[:reg_param] * (t + 1))
+          mean_vec = NMatrix.zeros([1, n_features])
+          target_ids.each { |n| mean_vec += x.row(n) * bin_y[n] }
+          mean_vec *= eta / n_subsamples
+          @weight_vec = @weight_vec * (1.0 - eta * params[:reg_param]) + mean_vec
+          # Scale the weight vector.
+          scaler = (1.0 / params[:reg_param]**0.5) / @weight_vec.norm2
+          @weight_vec *= [1.0, scaler].min
+        end
+        self
+      end
+
+      # Calculate confidence scores for samples.
+      #
+      # :call-seq:
+      #   decision_function(x) -> NMatrix, shape: [1, n_samples]
+      #
+      # * *Arguments* :
+      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The samples to compute the scores.
+      # * *Returns* :
+      #   - Confidence score per sample.
+      def decision_function(x)
+        @weight_vec.dot(x.transpose)
+      end
+
+      # Predict class labels for samples.
+      #
+      # :call-seq:
+      #   predict(x) -> NMatrix, shape: [1, n_samples]
+      #
+      # * *Arguments* :
+      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The samples to predict the labels.
+      # * *Returns* :
+      #   - Predicted class label per sample.
+      def predict(x)
+        decision_function(x).map { |v| v >= 0 ? 1 : -1 }
+      end
+
+      # Calculate the mean accuracy of the given testing data.
+      #
+      # :call-seq:
+      #   score(x, y) -> Float
+      #
+      # * *Arguments* :
+      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- Testing data.
+      #   - +y+ (NMatrix, shape: [1, n_samples]) -- True labels for testing data.
+      # * *Returns* :
+      #   - Mean accuracy
+      def score(x, y)
+        p = predict(x)
+        n_hits = (y.to_flat_a.map.with_index { |l, n| l == p[n] ? 1 : 0 }).inject(:+)
+        n_hits / y.size.to_f
+      end
+
+      # Serializes object through Marshal#dump.
+      def marshal_dump # :nodoc:
+        { params: params, weight_vec: Utils.dump_nmatrix(@weight_vec), rng: @rng }
+      end
+
+      # Deserialize object through Marshal#load.
+      def marshal_load(obj) # :nodoc:
+        self.params = obj[:params]
+        @weight_vec = Utils.restore_nmatrix(obj[:weight_vec])
+        @rng = obj[:rng]
+        nil
+      end
+    end
+  end
+end
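Stripped of NMatrix and mini-batches, the update in `fit` above is the standard Pegasos step: decay the weights by `(1 - eta * lambda)` with `eta = 1 / (lambda * t)`, add `eta * y * x` for margin violators, then project onto the ball of radius `1 / sqrt(lambda)`. A minimal single-sample sketch on plain arrays; the toy data and hyperparameters used below are illustrative:

```ruby
# Single-sample Pegasos for a linear SVM without bias.
# xs: array of feature arrays, ys: labels in {+1, -1}.
def pegasos_fit(xs, ys, lambda_, max_iter, rng)
  w = Array.new(xs.first.size, 0.0)
  max_iter.times do |t|
    eta = 1.0 / (lambda_ * (t + 1))
    i = rng.rand(xs.size) # batch of one for brevity
    margin = ys[i] * w.each_with_index.sum { |wj, j| wj * xs[i][j] }
    w = w.map { |wj| wj * (1.0 - eta * lambda_) }            # decay step
    if margin < 1                                            # hinge-loss violator
      w = w.each_with_index.map { |wj, j| wj + eta * ys[i] * xs[i][j] }
    end
    norm = Math.sqrt(w.sum { |wj| wj * wj })
    if norm > 0                                              # projection step
      scale = [1.0, (1.0 / Math.sqrt(lambda_)) / norm].min
      w = w.map { |wj| wj * scale }
    end
  end
  w
end
```

On a small linearly separable set this converges to a weight vector that classifies the training points by the sign of `w . x`.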
data/lib/svmkit/multiclass/one_vs_rest_classifier.rb
ADDED
@@ -0,0 +1,127 @@
+require 'svmkit/base/base_estimator'
+require 'svmkit/base/classifier'
+
+module SVMKit
+  # This module consists of the classes that implement multiclass classification strategies.
+  module Multiclass
+    # OneVsRestClassifier is a class that implements the One-vs-Rest (OvR) strategy for multiclass classification.
+    #
+    #   base_estimator =
+    #     SVMKit::LinearModel::PegasosSVC.new(reg_param: 1.0, max_iter: 100, batch_size: 20, random_seed: 1)
+    #   estimator = SVMKit::Multiclass::OneVsRestClassifier.new(estimator: base_estimator)
+    #   estimator.fit(training_samples, training_labels)
+    #   results = estimator.predict(testing_samples)
+    #
+    class OneVsRestClassifier
+      include Base::BaseEstimator
+      include Base::Classifier
+
+      DEFAULT_PARAMS = { # :nodoc:
+        estimator: nil
+      }.freeze
+
+      # The set of estimators.
+      attr_reader :estimators
+
+      # The class labels.
+      attr_reader :classes
+
+      # Create a new multiclass classifier with the one-vs-rest strategy.
+      #
+      # :call-seq:
+      #   new(estimator: base_estimator) -> OneVsRestClassifier
+      #
+      # * *Arguments* :
+      #   - +:estimator+ (Classifier) (defaults to: nil) -- The binary classifier used for constructing the multiclass classifier.
+      def initialize(params = {})
+        self.params = DEFAULT_PARAMS.merge(Hash[params.map { |k, v| [k.to_sym, v] }])
+        @estimators = nil
+        @classes = nil
+      end
+
+      # Fit the model with given training data.
+      #
+      # :call-seq:
+      #   fit(x, y) -> OneVsRestClassifier
+      #
+      # * *Arguments* :
+      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The training data to be used for fitting the model.
+      #   - +y+ (NMatrix, shape: [1, n_samples]) -- The labels to be used for fitting the model.
+      # * *Returns* :
+      #   - The learned classifier itself.
+      def fit(x, y)
+        @classes = y.uniq.sort
+        @estimators = @classes.map do |label|
+          bin_y = y.map { |l| l == label ? 1 : -1 }
+          params[:estimator].dup.fit(x, bin_y)
+        end
+        self
+      end
+
+      # Calculate confidence scores for samples.
+      #
+      # :call-seq:
+      #   decision_function(x) -> NMatrix, shape: [n_samples, n_classes]
+      #
+      # * *Arguments* :
+      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The samples to compute the scores.
+      # * *Returns* :
+      #   - Confidence scores per sample for each class.
+      def decision_function(x)
+        n_samples, = x.shape
+        n_classes = @classes.size
+        NMatrix.new(
+          [n_classes, n_samples],
+          Array.new(n_classes) { |m| @estimators[m].decision_function(x).to_a }.flatten
+        ).transpose
+      end
+
+      # Predict class labels for samples.
+      #
+      # :call-seq:
+      #   predict(x) -> NMatrix, shape: [1, n_samples]
+      #
+      # * *Arguments* :
+      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The samples to predict the labels.
+      # * *Returns* :
+      #   - Predicted class label per sample.
+      def predict(x)
+        n_samples, = x.shape
+        decision_values = decision_function(x)
+        NMatrix.new([1, n_samples],
+                    decision_values.each_row.map { |vals| @classes[vals.to_a.index(vals.to_a.max)] })
+      end
+
+      # Calculate the mean accuracy of the given testing data.
+      #
+      # :call-seq:
+      #   score(x, y) -> Float
+      #
+      # * *Arguments* :
+      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- Testing data.
+      #   - +y+ (NMatrix, shape: [1, n_samples]) -- True labels for testing data.
+      # * *Returns* :
+      #   - Mean accuracy
+      def score(x, y)
+        p = predict(x)
+        n_hits = (y.to_flat_a.map.with_index { |l, n| l == p[n] ? 1 : 0 }).inject(:+)
+        n_hits / y.size.to_f
+      end
+
+      # Serializes object through Marshal#dump.
+      def marshal_dump # :nodoc:
+        { params: params,
+          classes: @classes,
+          estimators: @estimators.map { |e| Marshal.dump(e) } }
+      end
+
+      # Deserialize object through Marshal#load.
+      def marshal_load(obj) # :nodoc:
+        self.params = obj[:params]
+        @classes = obj[:classes]
+        @estimators = obj[:estimators].map { |e| Marshal.load(e) }
+        nil
+      end
+    end
+  end
+end
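The reduction `fit` performs above is easy to state independently of the base learner: relabel each class against the rest as {+1, -1}, train one scorer per class, and predict by argmax over the per-class scores. A sketch with a stand-in binary learner (a difference-of-means linear scorer centered at the class midpoint, not PegasosSVC; the toy data in the test is illustrative):

```ruby
# One-vs-rest over a toy binary scorer. Returns one [label, w, mid]
# triple per class: w points from the rest-mean toward the class mean,
# and mid centers the score so the sign is meaningful.
def fit_ovr(xs, ys)
  mean = ->(rows) { rows.first.each_index.map { |j| rows.sum { |r| r[j] } / rows.size.to_f } }
  ys.uniq.sort.map do |label|
    pos = xs.zip(ys).select { |_, y| y == label }.map(&:first)
    neg = xs.zip(ys).reject { |_, y| y == label }.map(&:first)
    pm = mean.call(pos)
    nm = mean.call(neg)
    w = pm.zip(nm).map { |p, n| p - n }
    mid = pm.zip(nm).map { |p, n| (p + n) / 2.0 }
    [label, w, mid]
  end
end

# Predict the label whose scorer is most confident (argmax over classes).
def predict_ovr(models, x)
  models.max_by { |_, w, mid| w.each_with_index.sum { |wj, j| wj * (x[j] - mid[j]) } }.first
end
```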
@@ -0,0 +1,57 @@
require 'svmkit/base/base_estimator'
require 'svmkit/base/transformer'

module SVMKit
  # This module consists of the classes that perform preprocessing.
  module Preprocessing
    # Normalize samples to unit L2-norm.
    #
    #   normalizer = SVMKit::Preprocessing::L2Normalizer.new
    #   new_samples = normalizer.fit_transform(samples)
    class L2Normalizer
      include Base::BaseEstimator
      include Base::Transformer

      # The vector consisting of the L2-norm of each sample.
      attr_reader :norm_vec # :nodoc:

      # Create a new normalizer for normalizing to unit L2-norm.
      #
      # :call-seq:
      #   new() -> L2Normalizer
      def initialize(_params = {})
        @norm_vec = nil
      end

      # Calculate the L2-norm of each sample.
      #
      # :call-seq:
      #   fit(x) -> L2Normalizer
      #
      # * *Arguments* :
      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The samples used to calculate the L2-norms.
      # * *Returns* :
      #   - L2Normalizer
      def fit(x, _y = nil)
        n_samples, = x.shape
        @norm_vec = NMatrix.new([1, n_samples],
                                Array.new(n_samples) { |n| x.row(n).norm2 })
        self
      end

      # Calculate the L2-norm of each sample, and then normalize samples to unit L2-norm.
      #
      # :call-seq:
      #   fit_transform(x) -> NMatrix
      #
      # * *Arguments* :
      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The samples to be normalized.
      # * *Returns* :
      #   - The normalized samples (NMatrix)
      def fit_transform(x, _y = nil)
        fit(x)
        x / @norm_vec.transpose.repeat(x.shape[1], 1)
      end
    end
  end
end
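The arithmetic behind `fit_transform` can be sketched in plain Ruby, without the NMatrix dependency: each row is divided by its own L2-norm. The `l2_normalize` helper below is hypothetical, purely an illustration of the math.

```ruby
# Plain-Ruby sketch of unit L2-norm normalization, mirroring what
# L2Normalizer#fit_transform computes (hypothetical helper, no NMatrix).
def l2_normalize(samples)
  samples.map do |row|
    # L2-norm of the row: sqrt of the sum of squared components.
    norm = Math.sqrt(row.sum { |v| v * v })
    row.map { |v| v / norm }
  end
end

normalized = l2_normalize([[3.0, 4.0], [1.0, 0.0]])
# Each output row has unit L2-norm: [[0.6, 0.8], [1.0, 0.0]]
```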
data/lib/svmkit/preprocessing/min_max_scaler.rb
ADDED
@@ -0,0 +1,99 @@
require 'svmkit/base/base_estimator'
require 'svmkit/base/transformer'

module SVMKit
  # This module consists of the classes that perform preprocessing.
  module Preprocessing
    # Normalize samples by scaling each feature to a given range.
    #
    #   normalizer = SVMKit::Preprocessing::MinMaxScaler.new(feature_range: [0.0, 1.0])
    #   new_training_samples = normalizer.fit_transform(training_samples)
    #   new_testing_samples = normalizer.transform(testing_samples)
    class MinMaxScaler
      include Base::BaseEstimator
      include Base::Transformer

      DEFAULT_PARAMS = { # :nodoc:
        feature_range: [0.0, 1.0]
      }.freeze

      # The vector consisting of the minimum value of each feature.
      attr_reader :min_vec # :nodoc:

      # The vector consisting of the maximum value of each feature.
      attr_reader :max_vec # :nodoc:

      # Create a new normalizer for scaling each feature to a given range.
      #
      # :call-seq:
      #   new(feature_range: [0.0, 1.0]) -> MinMaxScaler
      #
      # * *Arguments* :
      #   - +:feature_range+ (Array) (defaults to: [0.0, 1.0]) -- The desired range of the samples.
      def initialize(params = {})
        @params = DEFAULT_PARAMS.merge(Hash[params.map { |k, v| [k.to_sym, v] }])
        @min_vec = nil
        @max_vec = nil
      end

      # Calculate the minimum and maximum value of each feature for scaling.
      #
      # :call-seq:
      #   fit(x) -> MinMaxScaler
      #
      # * *Arguments* :
      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The samples used to calculate the minimum and maximum values.
      # * *Returns* :
      #   - MinMaxScaler
      def fit(x, _y = nil)
        @min_vec = x.min(0)
        @max_vec = x.max(0)
        self
      end

      # Calculate the minimum and maximum values, and then normalize samples to feature_range.
      #
      # :call-seq:
      #   fit_transform(x) -> NMatrix
      #
      # * *Arguments* :
      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The samples to be scaled.
      # * *Returns* :
      #   - The scaled samples (NMatrix)
      def fit_transform(x, _y = nil)
        fit(x).transform(x)
      end

      # Scale the given samples according to feature_range.
      #
      # :call-seq:
      #   transform(x) -> NMatrix
      #
      # * *Arguments* :
      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The samples to be scaled.
      # * *Returns* :
      #   - The scaled samples (NMatrix)
      def transform(x)
        n_samples, = x.shape
        dif_vec = @max_vec - @min_vec
        nx = (x - @min_vec.repeat(n_samples, 0)) / dif_vec.repeat(n_samples, 0)
        nx * (@params[:feature_range][1] - @params[:feature_range][0]) + @params[:feature_range][0]
      end

      # Serialize the object through Marshal#dump.
      def marshal_dump # :nodoc:
        { params: @params,
          min_vec: Utils.dump_nmatrix(@min_vec),
          max_vec: Utils.dump_nmatrix(@max_vec) }
      end

      # Deserialize the object through Marshal#load.
      def marshal_load(obj) # :nodoc:
        @params = obj[:params]
        @min_vec = Utils.restore_nmatrix(obj[:min_vec])
        @max_vec = Utils.restore_nmatrix(obj[:max_vec])
        nil
      end
    end
  end
end
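The `transform` step maps each feature through `(x - min) / (max - min)` and then rescales to the requested range. A plain-Ruby sketch of that arithmetic, using arrays instead of NMatrix (the `min_max_scale` helper is hypothetical):

```ruby
# Plain-Ruby sketch of the per-feature min-max scaling performed by
# MinMaxScaler: (x - min) / (max - min), rescaled to [low, high].
def min_max_scale(samples, feature_range: [0.0, 1.0])
  low, high = feature_range
  n_features = samples.first.size
  # Column-wise minimum and maximum, as in MinMaxScaler#fit.
  mins = (0...n_features).map { |j| samples.map { |row| row[j] }.min }
  maxs = (0...n_features).map { |j| samples.map { |row| row[j] }.max }
  samples.map do |row|
    row.each_with_index.map do |v, j|
      (v - mins[j]) / (maxs[j] - mins[j]) * (high - low) + low
    end
  end
end

scaled = min_max_scale([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
# Each column is mapped to [0, 1]: [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
```

Note that, like the library code above, this sketch divides by `max - min` and so assumes no feature is constant.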
data/lib/svmkit/preprocessing/standard_scaler.rb
ADDED
@@ -0,0 +1,87 @@
require 'svmkit/base/base_estimator'
require 'svmkit/base/transformer'

module SVMKit
  # This module consists of the classes that perform preprocessing.
  module Preprocessing
    # Normalize samples by centering and scaling to unit variance.
    #
    #   normalizer = SVMKit::Preprocessing::StandardScaler.new
    #   new_training_samples = normalizer.fit_transform(training_samples)
    #   new_testing_samples = normalizer.transform(testing_samples)
    class StandardScaler
      include Base::BaseEstimator
      include Base::Transformer

      # The vector consisting of the mean value of each feature.
      attr_reader :mean_vec # :nodoc:

      # The vector consisting of the standard deviation of each feature.
      attr_reader :std_vec # :nodoc:

      # Create a new normalizer for centering and scaling to unit variance.
      #
      # :call-seq:
      #   new() -> StandardScaler
      def initialize(_params = {})
        @mean_vec = nil
        @std_vec = nil
      end

      # Calculate the mean value and standard deviation of each feature for scaling.
      #
      # :call-seq:
      #   fit(x) -> StandardScaler
      #
      # * *Arguments* :
      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The samples used to calculate the mean values and standard deviations.
      # * *Returns* :
      #   - StandardScaler
      def fit(x, _y = nil)
        @mean_vec = x.mean(0)
        @std_vec = x.std(0)
        self
      end

      # Calculate the mean values and standard deviations, and then normalize samples with them.
      #
      # :call-seq:
      #   fit_transform(x) -> NMatrix
      #
      # * *Arguments* :
      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The samples to be standardized.
      # * *Returns* :
      #   - The scaled samples (NMatrix)
      def fit_transform(x, _y = nil)
        fit(x).transform(x)
      end

      # Perform standardization of the given samples.
      #
      # :call-seq:
      #   transform(x) -> NMatrix
      #
      # * *Arguments* :
      #   - +x+ (NMatrix, shape: [n_samples, n_features]) -- The samples to be scaled.
      # * *Returns* :
      #   - The scaled samples (NMatrix)
      def transform(x)
        n_samples, = x.shape
        (x - @mean_vec.repeat(n_samples, 0)) / @std_vec.repeat(n_samples, 0)
      end

      # Serialize the object through Marshal#dump.
      def marshal_dump # :nodoc:
        { mean_vec: Utils.dump_nmatrix(@mean_vec),
          std_vec: Utils.dump_nmatrix(@std_vec) }
      end

      # Deserialize the object through Marshal#load.
      def marshal_load(obj) # :nodoc:
        @mean_vec = Utils.restore_nmatrix(obj[:mean_vec])
        @std_vec = Utils.restore_nmatrix(obj[:std_vec])
        nil
      end
    end
  end
end
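The standardization in `transform` is `(x - mean) / std` per feature. A plain-Ruby sketch of the same math (the `standardize` helper is hypothetical; it uses the population standard deviation, whereas the exact variant the library uses is whatever `NMatrix#std` computes):

```ruby
# Plain-Ruby sketch of per-feature standardization: subtract the
# column mean, divide by the column standard deviation (population form).
def standardize(samples)
  n = samples.size.to_f
  n_features = samples.first.size
  means = (0...n_features).map { |j| samples.sum { |row| row[j] } / n }
  stds = (0...n_features).map do |j|
    Math.sqrt(samples.sum { |row| (row[j] - means[j])**2 } / n)
  end
  samples.map do |row|
    row.each_with_index.map { |v, j| (v - means[j]) / stds[j] }
  end
end

standardized = standardize([[1.0], [2.0], [3.0]])
# Mean 2.0, population std sqrt(2/3); the column is centered around 0.
```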
data/lib/svmkit/utils.rb
ADDED
@@ -0,0 +1,33 @@
module SVMKit
  # Module for utility methods.
  module Utils
    class << self
      # Dump an NMatrix object to a Ruby Hash.
      #
      # :call-seq:
      #   dump_nmatrix(mat) -> Hash
      #
      # * *Arguments* :
      #   - +mat+ -- An NMatrix object to be converted to a Ruby Hash.
      # * *Returns* :
      #   - A Ruby Hash containing the matrix information.
      def dump_nmatrix(mat)
        return nil if mat.class != NMatrix
        { shape: mat.shape, array: mat.to_flat_a, dtype: mat.dtype, stype: mat.stype }
      end

      # Return the result of converting the dumped data into an NMatrix object.
      #
      # :call-seq:
      #   restore_nmatrix(dmp) -> NMatrix
      #
      # * *Arguments* :
      #   - +dmp+ -- A Ruby Hash describing an NMatrix object, created with the SVMKit::Utils.dump_nmatrix method.
      # * *Returns* :
      #   - An NMatrix object restored from the given Hash.
      def restore_nmatrix(dmp = {})
        return nil unless dmp.class == Hash && %i[shape array dtype stype].all?(&dmp.method(:has_key?))
        NMatrix.new(dmp[:shape], dmp[:array], dtype: dmp[:dtype], stype: dmp[:stype])
      end
    end
  end
end
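The dump/restore pair is a round-trip: a matrix is flattened into a Hash of its shape and data, and rebuilt from that Hash. The same pattern can be sketched with a plain Struct standing in for NMatrix (the `Mat`, `dump_mat`, and `restore_mat` names are hypothetical):

```ruby
# Hypothetical sketch of the dump/restore round-trip in SVMKit::Utils,
# with a plain Struct in place of NMatrix.
Mat = Struct.new(:shape, :array)

def dump_mat(mat)
  # Refuse anything that is not a Mat, as dump_nmatrix does for NMatrix.
  return nil unless mat.is_a?(Mat)
  { shape: mat.shape, array: mat.array }
end

def restore_mat(dmp = {})
  # Require every expected key before rebuilding.
  return nil unless dmp.is_a?(Hash) && %i[shape array].all? { |k| dmp.key?(k) }
  Mat.new(dmp[:shape], dmp[:array])
end

m = Mat.new([2, 2], [1, 2, 3, 4])
restored = restore_mat(dump_mat(m))
# The round-trip preserves the matrix data: restored == m
```

This is the mechanism MinMaxScaler and StandardScaler rely on in their `marshal_dump`/`marshal_load` methods above.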
data/svmkit.gemspec
ADDED
@@ -0,0 +1,37 @@
# coding: utf-8
lib = File.expand_path('../lib', __FILE__)
$LOAD_PATH.unshift(lib) unless $LOAD_PATH.include?(lib)

require 'svmkit/version'

SVMKit::DESCRIPTION = <<MSG
SVMKit is a library for machine learning in Ruby.
SVMKit implements machine learning algorithms with an interface similar to Scikit-Learn in Python.
However, since SVMKit is an experimental library, there are few machine learning algorithms implemented.
MSG

Gem::Specification.new do |spec|
  spec.name = 'svmkit'
  spec.version = SVMKit::VERSION
  spec.authors = ['yoshoku']
  spec.email = ['yoshoku@outlook.com']

  spec.summary = %q{SVMKit is an experimental library of machine learning in Ruby.}
  spec.description = SVMKit::DESCRIPTION
  spec.homepage = 'https://github.com/yoshoku/svmkit'
  spec.license = 'BSD-2-Clause'

  spec.files = `git ls-files -z`.split("\x0").reject do |f|
    f.match(%r{^(test|spec|features)/})
  end
  spec.bindir = 'exe'
  spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
  spec.require_paths = ['lib']

  # spec.add_runtime_dependency 'nmatrix', '~> 0.2.3'

  spec.add_development_dependency 'bundler', '~> 1.15'
  spec.add_development_dependency 'rake', '~> 10.0'
  spec.add_development_dependency 'rspec', '~> 3.0'
  spec.add_development_dependency 'nmatrix', '~> 0.2.3'
end
metadata
ADDED
@@ -0,0 +1,128 @@
--- !ruby/object:Gem::Specification
name: svmkit
version: !ruby/object:Gem::Version
  version: 0.1.0
platform: ruby
authors:
- yoshoku
autorequire:
bindir: exe
cert_chain: []
date: 2017-09-30 00:00:00.000000000 Z
dependencies:
- !ruby/object:Gem::Dependency
  name: bundler
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.15'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '1.15'
- !ruby/object:Gem::Dependency
  name: rake
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '10.0'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '10.0'
- !ruby/object:Gem::Dependency
  name: rspec
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.0'
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: '3.0'
- !ruby/object:Gem::Dependency
  name: nmatrix
  requirement: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: 0.2.3
  type: :development
  prerelease: false
  version_requirements: !ruby/object:Gem::Requirement
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
        version: 0.2.3
description: |
  SVMKit is a library for machine learning in Ruby.
  SVMKit implements machine learning algorithms with an interface similar to Scikit-Learn in Python.
  However, since SVMKit is an experimental library, there are few machine learning algorithms implemented.
email:
- yoshoku@outlook.com
executables: []
extensions: []
extra_rdoc_files: []
files:
- ".gitignore"
- ".rspec"
- ".rubocop.yml"
- ".travis.yml"
- CODE_OF_CONDUCT.md
- Gemfile
- HISTORY.md
- LICENSE.txt
- README.md
- Rakefile
- bin/console
- bin/setup
- lib/svmkit.rb
- lib/svmkit/base/base_estimator.rb
- lib/svmkit/base/classifier.rb
- lib/svmkit/base/transformer.rb
- lib/svmkit/kernel_approximation/rbf.rb
- lib/svmkit/linear_model/pegasos_svc.rb
- lib/svmkit/multiclass/one_vs_rest_classifier.rb
- lib/svmkit/preprocessing/l2_normalizer.rb
- lib/svmkit/preprocessing/min_max_scaler.rb
- lib/svmkit/preprocessing/standard_scaler.rb
- lib/svmkit/utils.rb
- lib/svmkit/version.rb
- svmkit.gemspec
homepage: https://github.com/yoshoku/svmkit
licenses:
- BSD-2-Clause
metadata: {}
post_install_message:
rdoc_options: []
require_paths:
- lib
required_ruby_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
required_rubygems_version: !ruby/object:Gem::Requirement
  requirements:
  - - ">="
    - !ruby/object:Gem::Version
      version: '0'
requirements: []
rubyforge_project:
rubygems_version: 2.6.13
signing_key:
specification_version: 4
summary: SVMKit is an experimental library of machine learning in Ruby.
test_files: []