libmf 0.1.2 → 0.1.3

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 907cba459d6d4af0607aa2662b0f32b5afc30a1940cbf6c4e7712553cb95709c
-   data.tar.gz: 16e84358005ebb5432a2765c00dd79fe26f9822cbdfcb0570d829099c11e5470
+   metadata.gz: 316a859127d3ee4a6b2af41599daf5d14e1f436479dfd8d1c8ebd739b8141367
+   data.tar.gz: 59b64d67f90955b81630873bc2b776cbf5a463174502ccd3f9e92eb29810eac9
  SHA512:
-   metadata.gz: 56e8dd57eb146cf81ffd11c193565e632f623d0ade5c2650344e19951502bd492becf6268066da6c630936e5bb08e3ee068d9ecb7f942457955c88b1faf9e5eb
-   data.tar.gz: 2764dab3e8d5d3044bf78482988addeb40c026b592cb57ccec3984f70244eeaf77a0a8a4e597a8607d9cf41c74e69ab47391f1003ba163a45b84b010821a76c5
+   metadata.gz: 8b2d80a014a92dd78533e31476909aac348906ab0caba74d30826e9f7057bb5f9dd649f10c49d2d436325dbdedc8b590ab031c65197cc4c1b8be6290f525ace9
+   data.tar.gz: 9af9ef4372b7ed124bc3cbdc823e3dfbfc1e10303c914895fbaa608f5959a6de560349148cc6a83e35ef002409042705af198d5f533fbc0e3a87c6fbce0a438b
data/CHANGELOG.md CHANGED
@@ -1,3 +1,8 @@
+ ## 0.1.3
+
+ - Made parameter names more Ruby-like
+ - No need to set `do_nmf` with generalized KL-divergence
+
  ## 0.1.2
 
  - Fixed bug in `p_factors` and `q_factors` methods
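In practice, the second entry means a model using generalized KL-divergence no longer needs `do_nmf` enabled by hand. A minimal sketch of the difference, using the renamed options introduced in the README changes further down (illustrative only):

```ruby
require "libmf"

# 0.1.2: generalized KL-divergence required enabling NMF explicitly
model = Libmf::Model.new(fun: 2, do_nmf: true)

# 0.1.3: NMF is switched on automatically when this loss is selected
model = Libmf::Model.new(loss: 2)
```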
data/README.md CHANGED
@@ -2,8 +2,6 @@
 
  [LIBMF](https://github.com/cjlin1/libmf) - large-scale sparse matrix factorization - for Ruby
 
- :fire: Uses the C API for blazing performance
-
  [![Build Status](https://travis-ci.org/ankane/libmf.svg?branch=master)](https://travis-ci.org/ankane/libmf)
 
  ## Installation
@@ -65,36 +63,6 @@ Pass a validation set
  model.fit(data, eval_set: eval_set)
  ```
 
- ## Parameters
-
- Pass parameters
-
- ```ruby
- model = Libmf::Model.new(k: 20, nr_iters: 50)
- ```
-
- Supports the same parameters as LIBMF
-
- ```text
- variable     meaning                                       default
- ================================================================
- fun          loss function                                 0
- k            number of latent factors                      8
- nr_threads   number of threads used                        12
- nr_bins      number of bins                                25
- nr_iters     number of iterations                          20
- lambda_p1    coefficient of L1-norm regularization on P    0
- lambda_p2    coefficient of L2-norm regularization on P    0.1
- lambda_q1    coefficient of L1-norm regularization on Q    0
- lambda_q2    coefficient of L2-norm regularization on Q    0.1
- eta          learning rate                                 0.1
- alpha        importance of negative entries                0.1
- c            desired value of negative entries             0.0001
- do_nmf       perform non-negative MF (NMF)                 false
- quiet        no outputs to stdout                          false
- copy_data    copy data in training procedure               true
- ```
-
  ## Cross-Validation
 
  Perform cross-validation
@@ -109,6 +77,50 @@ Specify the number of folds
  model.cv(data, folds: 5)
  ```
 
+ ## Parameters
+
+ Pass parameters - default values below
+
+ ```ruby
+ Libmf::Model.new(
+   loss: 0,              # loss function
+   factors: 8,           # number of latent factors
+   threads: 12,          # number of threads used
+   bins: 25,             # number of bins
+   iterations: 20,       # number of iterations
+   lambda_p1: 0,         # coefficient of L1-norm regularization on P
+   lambda_p2: 0.1,       # coefficient of L2-norm regularization on P
+   lambda_q1: 0,         # coefficient of L1-norm regularization on Q
+   lambda_q2: 0.1,       # coefficient of L2-norm regularization on Q
+   learning_rate: 0.1,   # learning rate
+   alpha: 0.1,           # importance of negative entries
+   c: 0.0001,            # desired value of negative entries
+   nmf: false,           # perform non-negative MF (NMF)
+   quiet: false,         # no outputs to stdout
+   copy_data: true       # copy data in training procedure
+ )
+ ```
+
+ ### Loss Functions
+
+ For real-valued matrix factorization
+
+ - 0 - squared error (L2-norm)
+ - 1 - absolute error (L1-norm)
+ - 2 - generalized KL-divergence
+
+ For binary matrix factorization
+
+ - 5 - logarithmic error
+ - 6 - squared hinge loss
+ - 7 - hinge loss
+
+ For one-class matrix factorization
+
+ - 10 - row-oriented pair-wise logarithmic loss
+ - 11 - column-oriented pair-wise logarithmic loss
+ - 12 - squared error (L2-norm)
+
  ## Resources
 
  - [LIBMF: A Library for Parallel Matrix Factorization in Shared-memory Systems](https://www.csie.ntu.edu.tw/~cjlin/papers/libmf/libmf_open_source.pdf)
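Putting the renamed parameters and the loss codes together, a training call might look like the sketch below. The toy `data` triples (row index, column index, value) and the chosen settings are illustrative, not part of this diff:

```ruby
require "libmf"

# toy sparse matrix entries as [row_index, column_index, value] (illustrative)
data = [
  [0, 0, 5.0],
  [0, 2, 3.5],
  [1, 1, 4.0],
  [2, 0, 2.0]
]

# squared-error loss, using the new Ruby-like parameter names
model = Libmf::Model.new(loss: 0, factors: 20, iterations: 50)
model.fit(data)
```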
@@ -68,11 +68,24 @@ module Libmf
 
  def param
    param = FFI.mf_get_default_param
+   options = @options.dup
    # silence insufficient blocks warning with default params
-   options = {nr_bins: 25}.merge(@options)
+   options[:bins] ||= 25 unless options[:nr_bins]
+   options_map = {
+     :loss => :fun,
+     :factors => :k,
+     :threads => :nr_threads,
+     :bins => :nr_bins,
+     :iterations => :nr_iters,
+     :learning_rate => :eta,
+     :nmf => :do_nmf
+   }
    options.each do |k, v|
+     k = options_map[k] if options_map[k]
      param[k] = v
    end
+   # do_nmf must be true for generalized KL-divergence
+   param[:do_nmf] = true if param[:fun] == 2
    param
  end
 
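One detail of the `param` method above: only keys present in `options_map` are rewritten, and anything else is assigned to `param` unchanged, so the original LIBMF names appear to remain usable alongside the new Ruby-like ones. A sketch of the two equivalent spellings (not exercised anywhere in this diff):

```ruby
require "libmf"

# new Ruby-like names, rewritten to k / nr_iters / eta via options_map
Libmf::Model.new(factors: 20, iterations: 50, learning_rate: 0.05)

# original LIBMF names, passed straight through as param[k] = v
Libmf::Model.new(k: 20, nr_iters: 50, eta: 0.05)
```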
@@ -1,3 +1,3 @@
  module Libmf
-   VERSION = "0.1.2"
+   VERSION = "0.1.3"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: libmf
  version: !ruby/object:Gem::Version
-   version: 0.1.2
+   version: 0.1.3
  platform: ruby
  authors:
  - Andrew Kane
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2019-11-07 00:00:00.000000000 Z
+ date: 2019-11-08 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: ffi