tensor_stream 1.0.5 → 1.0.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 7efc04eb9823375da0fefce5086213e445c11410964c067ed09adcbddad089d2
4
- data.tar.gz: 63edff2f84664c2f86850a41270fb2711e65b8a6d1ebd9f88754b72280c5661a
3
+ metadata.gz: a3ecf49d32d385479e8dbf1c4e0ab57f120f3f0b1d78ab201d5032c42e5686f2
4
+ data.tar.gz: 8a299cbb8d49bac037d827f986478338cad7c3390897d3974557f4b65711da5b
5
5
  SHA512:
6
- metadata.gz: 0f80c668f87e41343dc1091379246816989dc5ffbe51e2141f5c35fb021b1f3872ef52bf9bd8d911ced3bd576883401904c250efa2a6aa2e47e60150f7c7821b
7
- data.tar.gz: 1bc34e5d8e82a4a9ed325a8fd7f57cc146cdff8879a84cf0d93cde61e574b14641d352c278bf020bfa1a1296af024526027b47b3c30184cb8bef37856d762ca3
6
+ metadata.gz: 37ed2c4b56bf7e859dfd458b87d93acdd1c0dec1079add7a7bee23493d679c2310c55a404f31ad55b64e4e7b2a6d0c46336b715752bc067242c3a496fcd7af70
7
+ data.tar.gz: be270b79c342832484726f83660e4b391328f41cbf61c4dd0d6f5e0016a040d1f3beac0916f7998b536b5f21336c46c187914b4fa1021c194c0f56fc4f83a4fc
@@ -223,6 +223,74 @@ vars = graph.get_collection(TensorStream::GraphKeys::GLOBAL_VARIABLES)
223
223
  => [Variable(Variable:0 shape: TensorShape([]) data_type: float32)]
224
224
  ```
225
225
 
226
+ High Performance Computing
227
+ --------------------------
228
+
229
+ TensorStream has been designed from the ground up to support multiple execution backends.
230
+
231
+ This means you can build your models once and then execute them later on specialized hardware, such as GPUs, when available.
232
+
233
+ An OpenCL backend is available that you can use for compute-intensive tasks like machine learning, especially those that use convolutional networks.
234
+
235
+ Using OpenCL is as simple as installing the tensor_stream-opencl gem:
236
+
237
+ ```
238
+ gem install tensor_stream-opencl
239
+ ```
240
+
241
+ You can then require the library in your programs and it will be used automatically (assuming you have also installed OpenCL drivers for your system):
242
+
243
+ ```ruby
244
+ require 'tensor_stream'
245
+
246
+ # enable OpenCL
247
+ require 'tensor_stream/opencl'
248
+
249
+ tf = TensorStream
250
+
251
+ srand(5)
252
+ seed = 5
253
+ tf.set_random_seed(seed)
254
+
255
+ SHAPES = [32, 32]
256
+ tf = TensorStream
257
+ sess = tf.session
258
+ large_tensor = tf.constant(sess.run(tf.random_uniform([256, 256])))
259
+
260
+ sum_axis_1 = tf.reduce_sum(large_tensor, 1)
261
+ sess.run(sum_axis_1)
262
+ ```
263
+
264
+ Using OpenCL can improve performance dramatically in scenarios involving large tensors:
265
+
266
+ ```
267
+ Linux 4.15.0-46-generic #49-Ubuntu SMP
268
+ model name : AMD Ryzen 3 1300X Quad-Core Processor
269
+ OpenCL device NVIDIA CUDA GeForce GTX 1060 6GB
270
+ ruby 2.6.2p47 (2019-03-13 revision 67232) [x86_64-linux]
271
+
272
+ user system total real
273
+ pure ruby softmax : 0.024724 0.000000 0.024724 ( 0.024731)
274
+ opencl softmax : 0.006237 0.003945 0.010182 ( 0.009005)
275
+ pure ruby matmul : 0.679538 0.000000 0.679538 ( 0.680048)
276
+ opencl matmul : 0.003456 0.007965 0.011421 ( 0.008568)
277
+ pure ruby sum : 3.210619 0.000000 3.210619 ( 3.210064)
278
+ opencl sum : 0.002431 0.008030 0.010461 ( 0.007522)
279
+ pure ruby sum axis 1 : 3.208789 0.000000 3.208789 ( 3.208125)
280
+ opencl sum axis 1 : 0.006075 0.003963 0.010038 ( 0.007679)
281
+ pure ruby conv2d_backprop : 3.738167 0.000000 3.738167 ( 3.737946)
282
+ opencl conv2d_backprop : 0.031267 0.003958 0.035225 ( 0.030381)
283
+ pure ruby conv2d : 0.794182 0.000000 0.794182 ( 0.794100)
284
+ opencl conv2d : 0.015865 0.004020 0.019885 ( 0.016878)
285
+ ```
286
+
287
+ A quick glance shows not a marginal improvement but an order-of-magnitude speedup in most operations.
288
+ In fact, operations like matmul run roughly 80x faster and reductions like sum run over 400x faster under OpenCL (both essential in machine learning). This is no surprise given the "embarrassingly parallel" nature of machine learning computation; it is why GPUs are effectively a requirement for most machine learning tasks.
289
+
290
+ The code containing these benchmarks can be found at:
291
+
292
+ tensor_stream-opencl/benchmark/benchmark.rb
293
+
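For reference, the pure-Ruby baselines in the table compute ordinary array math. Here is a minimal standalone sketch of softmax and sum-along-axis-1 in plain Ruby (no gems assumed; the method names are illustrative only, not part of the library):

```ruby
# Softmax over a vector: exponentiate, then normalize so the result sums to 1.
def softmax(vec)
  m = vec.max                              # subtract the max for numerical stability
  exps = vec.map { |v| Math.exp(v - m) }
  total = exps.sum
  exps.map { |e| e / total }
end

# reduce_sum along axis 1 of a 2-D array: one total per row.
def sum_axis_1(matrix)
  matrix.map(&:sum)
end

softmax([0.0, 0.0])           # => [0.5, 0.5]
sum_axis_1([[1, 2], [3, 4]])  # => [3, 7]
```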
226
294
  Limitations
227
295
  -----------
228
296
 
@@ -37,6 +37,24 @@ module TensorStream
37
37
  end
38
38
  end
39
39
 
40
+ register_op :bias_add do |_context, _tensor, inputs|
41
+ value, bias = inputs
42
+ arr = value.flatten.each_slice(bias.size).map do |slice|
43
+ slice.each_with_index.map { |elem, index| elem + bias[index] }
44
+ end
45
+ TensorShape.reshape(arr, shape_eval(value))
46
+ end
47
+
48
+ register_op :bias_add_grad do |_context, _tensor, inputs|
49
+ received_grad = inputs[0]
50
+ bias_size = shape_eval(received_grad).last
51
+ grad_sum = Array.new(bias_size) { 0.0 }
52
+ received_grad.flatten.each_slice(bias_size) do |slice|
53
+ slice.each_with_index.map { |elem, index| grad_sum[index] += elem }
54
+ end
55
+ grad_sum
56
+ end
57
+
40
58
  register_op :sub, no_eval: true do |context, tensor, inputs|
41
59
  a, b = inputs
42
60
  call_vector_op(tensor, :sub, a, b, context) { |t, u| t - u }
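The bias_add and bias_add_grad kernels added above reduce to plain array arithmetic. A standalone sketch of what they compute, using the same slicing approach on ordinary Ruby arrays (no TensorStream required):

```ruby
# bias_add: add a bias vector along the last dimension of a tensor.
value = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]  # shape [2, 3]
bias  = [0.5, 0.5, 0.5]                     # shape [3]

biased = value.flatten.each_slice(bias.size).map do |slice|
  slice.each_with_index.map { |elem, index| elem + bias[index] }
end
# biased => [[1.5, 2.5, 3.5], [4.5, 5.5, 6.5]]

# bias_add_grad: the bias gradient is the incoming gradient summed over
# every axis except the last one.
bias_size = biased.first.size
grad_sum = Array.new(bias_size) { 0.0 }
biased.flatten.each_slice(bias_size) do |slice|
  slice.each_with_index { |elem, index| grad_sum[index] += elem }
end
# grad_sum => [6.0, 8.0, 10.0]
```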
@@ -9,12 +9,12 @@ module TensorStream
9
9
  #
10
10
  # This operation supports broadcasting
11
11
  #
12
- # Params:
13
- # +input_a+:: tensor X
14
- # +input_b+:: tensor Y
12
+ # @param input_a tensor X
13
+ # @param input_b tensor Y
15
14
  #
16
15
  # Options:
17
- # +:name+:: Optional name
16
+ # @option name Optional name
17
+ # @return Tensor
18
18
  def add(input_a, input_b, name: nil)
19
19
  input_a, input_b = apply_data_type_coercion(input_a, input_b)
20
20
  _op(:add, input_a, input_b, name: name)
@@ -25,14 +25,14 @@ module TensorStream
25
25
  # Returns the index with the largest value across axes of a tensor.
26
26
  #
27
27
  #
28
- # Params:
29
- # +input_a+:: tensor X (of type NUMERIC_TYPES)
30
- # +axis+:: Describes which axis of the input tensor to reduce across. For vectors, use axis = 0 (of type INTEGER_TYPES)
28
+ # @param input_a tensor X (of type NUMERIC_TYPES)
29
+ # @param axis Describes which axis of the input tensor to reduce across. For vectors, use axis = 0 (of type INTEGER_TYPES)
31
30
  #
32
31
  # Options:
33
- # +:name+:: Optional name
34
- # +:dimension+:: Same as axis
35
- # +:output_type+:: Output data type defaults to int32 default (:int32)
32
+ # @option name Optional name
33
+ # @option dimension Same as axis
34
+ # @option output_type Output data type defaults to int32 default (:int32)
35
+ # @return Tensor
36
36
  def argmax(input_a, axis = nil, name: nil, dimension: nil, output_type: :int32)
37
37
  check_allowed_types(input_a, TensorStream::Ops::NUMERIC_TYPES)
38
38
  check_allowed_types(axis, TensorStream::Ops::INTEGER_TYPES)
@@ -44,14 +44,14 @@ module TensorStream
44
44
  # Returns the index with the smallest value across axes of a tensor.
45
45
  #
46
46
  #
47
- # Params:
48
- # +input_a+:: tensor X (of type NUMERIC_TYPES)
49
- # +axis+:: Describes which axis of the input tensor to reduce across. For vectors, use axis = 0 (of type INTEGER_TYPES)
47
+ # @param input_a tensor X (of type NUMERIC_TYPES)
48
+ # @param axis Describes which axis of the input tensor to reduce across. For vectors, use axis = 0 (of type INTEGER_TYPES)
50
49
  #
51
50
  # Options:
52
- # +:name+:: Optional name
53
- # +:dimension+:: Same as axis
54
- # +:output_type+:: Output data type defaults to int32 default (:int32)
51
+ # @option name Optional name
52
+ # @option dimension Same as axis
53
+ # @option output_type Output data type defaults to int32 default (:int32)
54
+ # @return Tensor
55
55
  def argmin(input_a, axis = nil, name: nil, dimension: nil, output_type: :int32)
56
56
  check_allowed_types(input_a, TensorStream::Ops::NUMERIC_TYPES)
57
57
  check_allowed_types(axis, TensorStream::Ops::INTEGER_TYPES)
@@ -63,11 +63,11 @@ module TensorStream
63
63
  # Returns element-wise smallest integer not less than x
64
64
  #
65
65
  #
66
- # Params:
67
- # +input_a+:: tensor X (of type FLOATING_POINT_TYPES)
66
+ # @param input_a tensor X (of type FLOATING_POINT_TYPES)
68
67
  #
69
68
  # Options:
70
- # +:name+:: Optional name
69
+ # @option name Optional name
70
+ # @return Tensor
71
71
  def ceil(input_a, name: nil)
72
72
  check_allowed_types(input_a, TensorStream::Ops::FLOATING_POINT_TYPES)
73
73
  _op(:ceil, input_a, name: name)
@@ -78,11 +78,11 @@ module TensorStream
78
78
  # Computes cos of input element-wise.
79
79
  #
80
80
  #
81
- # Params:
82
- # +input_a+:: tensor X (of type FLOATING_POINT_TYPES)
81
+ # @param input_a tensor X (of type FLOATING_POINT_TYPES)
83
82
  #
84
83
  # Options:
85
- # +:name+:: Optional name
84
+ # @option name Optional name
85
+ # @return Tensor
86
86
  def cos(input_a, name: nil)
87
87
  check_allowed_types(input_a, TensorStream::Ops::FLOATING_POINT_TYPES)
88
88
  _op(:cos, input_a, name: name)
@@ -94,12 +94,12 @@ module TensorStream
94
94
  #
95
95
  # This operation supports broadcasting
96
96
  #
97
- # Params:
98
- # +input_a+:: tensor X
99
- # +input_b+:: tensor Y
97
+ # @param input_a tensor X
98
+ # @param input_b tensor Y
100
99
  #
101
100
  # Options:
102
- # +:name+:: Optional name
101
+ # @option name Optional name
102
+ # @return Tensor
103
103
  def div(input_a, input_b, name: nil)
104
104
  input_a, input_b = apply_data_type_coercion(input_a, input_b)
105
105
  _op(:div, input_a, input_b, name: name)
@@ -111,12 +111,12 @@ module TensorStream
111
111
  #
112
112
  # This operation supports broadcasting
113
113
  #
114
- # Params:
115
- # +input_a+:: tensor X
116
- # +input_b+:: tensor Y
114
+ # @param input_a tensor X
115
+ # @param input_b tensor Y
117
116
  #
118
117
  # Options:
119
- # +:name+:: Optional name
118
+ # @option name Optional name
119
+ # @return Tensor
120
120
  def equal(input_a, input_b, name: nil)
121
121
  input_a, input_b = apply_data_type_coercion(input_a, input_b)
122
122
  _op(:equal, input_a, input_b, name: name)
@@ -129,12 +129,12 @@ module TensorStream
129
129
  # dimension index axis starts at zero; if you specify a negative number for axis it is counted backward from the end.
130
130
  #
131
131
  #
132
- # Params:
133
- # +input+:: A tensor
134
- # +axis+:: Specifies the dimension index at which to expand the shape of input. Must be in the range [-rank(input) - 1, rank(input)].
132
+ # @param input A tensor
133
+ # @param axis Specifies the dimension index at which to expand the shape of input. Must be in the range [-rank(input) - 1, rank(input)].
135
134
  #
136
135
  # Options:
137
- # +:name+:: Optional name
136
+ # @option name Optional name
137
+ # @return Tensor
138
138
  def expand_dims(input, axis, name: nil)
139
139
  _op(:expand_dims, input, axis, name: name)
140
140
  end
@@ -144,12 +144,12 @@ module TensorStream
144
144
  # This operation creates a tensor of shape dims and fills it with value.
145
145
  #
146
146
  #
147
- # Params:
148
- # +dims+:: tensor shape
149
- # +value+:: scalar value to fill with
147
+ # @param dims tensor shape
148
+ # @param value scalar value to fill with
150
149
  #
151
150
  # Options:
152
- # +:name+:: Optional name
151
+ # @option name Optional name
152
+ # @return Tensor
153
153
  def fill(dims, value, name: nil)
154
154
  _op(:fill, dims, value, name: name)
155
155
  end
@@ -159,11 +159,11 @@ module TensorStream
159
159
  # Returns element-wise largest integer not greater than x.
160
160
  #
161
161
  #
162
- # Params:
163
- # +input_a+:: tensor X (of type FLOATING_POINT_TYPES)
162
+ # @param input_a tensor X (of type FLOATING_POINT_TYPES)
164
163
  #
165
164
  # Options:
166
- # +:name+:: Optional name
165
+ # @option name Optional name
166
+ # @return Tensor
167
167
  def floor(input_a, name: nil)
168
168
  check_allowed_types(input_a, TensorStream::Ops::FLOATING_POINT_TYPES)
169
169
  _op(:floor, input_a, name: name)
@@ -175,12 +175,12 @@ module TensorStream
175
175
  #
176
176
  # This operation supports broadcasting
177
177
  #
178
- # Params:
179
- # +input_a+:: tensor X
180
- # +input_b+:: tensor Y
178
+ # @param input_a tensor X
179
+ # @param input_b tensor Y
181
180
  #
182
181
  # Options:
183
- # +:name+:: Optional name
182
+ # @option name Optional name
183
+ # @return Tensor
184
184
  def floor_div(input_a, input_b, name: nil)
185
185
  input_a, input_b = apply_data_type_coercion(input_a, input_b)
186
186
  _op(:floor_div, input_a, input_b, name: name)
@@ -192,12 +192,12 @@ module TensorStream
192
192
  #
193
193
  # This operation supports broadcasting
194
194
  #
195
- # Params:
196
- # +input_a+:: tensor X
197
- # +input_b+:: tensor Y
195
+ # @param input_a tensor X
196
+ # @param input_b tensor Y
198
197
  #
199
198
  # Options:
200
- # +:name+:: Optional name
199
+ # @option name Optional name
200
+ # @return Tensor
201
201
  def greater(input_a, input_b, name: nil)
202
202
  input_a, input_b = apply_data_type_coercion(input_a, input_b)
203
203
  _op(:greater, input_a, input_b, name: name)
@@ -209,29 +209,46 @@ module TensorStream
209
209
  #
210
210
  # This operation supports broadcasting
211
211
  #
212
- # Params:
213
- # +input_a+:: tensor X
214
- # +input_b+:: tensor Y
212
+ # @param input_a tensor X
213
+ # @param input_b tensor Y
215
214
  #
216
215
  # Options:
217
- # +:name+:: Optional name
216
+ # @option name Optional name
217
+ # @return Tensor
218
218
  def greater_equal(input_a, input_b, name: nil)
219
219
  input_a, input_b = apply_data_type_coercion(input_a, input_b)
220
220
  _op(:greater_equal, input_a, input_b, name: name)
221
221
  end
222
222
 
223
223
 
224
+ ##
225
+ # Returns the truth value of (x < y) element-wise.
226
+ #
227
+ # This operation supports broadcasting
228
+ #
229
+ # @param input_a tensor X
230
+ # @param input_b tensor Y
231
+ #
232
+ # Options:
233
+ # @option name Optional name
234
+ # @return Tensor
235
+ def less(input_a, input_b, name: nil)
236
+ input_a, input_b = apply_data_type_coercion(input_a, input_b)
237
+ _op(:less, input_a, input_b, name: name)
238
+ end
239
+
240
+
224
241
  ##
225
242
  # Returns the truth value of (x <= y) element-wise.
226
243
  #
227
244
  # This operation supports broadcasting
228
245
  #
229
- # Params:
230
- # +input_a+:: tensor X
231
- # +input_b+:: tensor Y
246
+ # @param input_a tensor X
247
+ # @param input_b tensor Y
232
248
  #
233
249
  # Options:
234
- # +:name+:: Optional name
250
+ # @option name Optional name
251
+ # @return Tensor
235
252
  def less_equal(input_a, input_b, name: nil)
236
253
  input_a, input_b = apply_data_type_coercion(input_a, input_b)
237
254
  _op(:less_equal, input_a, input_b, name: name)
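The comparison ops documented above are element-wise and support broadcasting. As a plain-Ruby illustration of the semantics only (not the library's implementation; `less` here is a hypothetical helper):

```ruby
# Element-wise (x < y) over 1-D arrays, broadcasting a scalar y if needed.
def less(x, y)
  y = Array.new(x.size, y) unless y.is_a?(Array)  # broadcast a scalar to x's shape
  x.zip(y).map { |a, b| a < b }
end

less([1, 5, 3], 4)          # => [true, false, true]
less([1, 5, 3], [2, 2, 2])  # => [true, false, false]
```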
@@ -242,11 +259,11 @@ module TensorStream
242
259
  # Computes natural logarithm of x element-wise.
243
260
  #
244
261
  #
245
- # Params:
246
- # +input+:: tensor X
262
+ # @param input tensor X
247
263
  #
248
264
  # Options:
249
- # +:name+:: Optional name
265
+ # @option name Optional name
266
+ # @return Tensor
250
267
  def log(input, name: nil)
251
268
  _op(:log, input, name: name)
252
269
  end
@@ -257,14 +274,14 @@ module TensorStream
257
274
  #
258
275
  # This operation supports broadcasting
259
276
  #
260
- # Params:
261
- # +input_a+:: tensor X
262
- # +input_b+:: tensor Y
277
+ # @param input_a tensor X
278
+ # @param input_b tensor Y
263
279
  #
264
280
  # Options:
265
- # +:transpose_a+:: Transpose matrix A first default (false)
266
- # +:transpose_b+:: Transpose matrix B first default (false)
267
- # +:name+:: Optional name
281
+ # @option transpose_a Transpose matrix A first default (false)
282
+ # @option transpose_b Transpose matrix B first default (false)
283
+ # @option name Optional name
284
+ # @return Tensor
268
285
  def mat_mul(input_a, input_b, transpose_a: false, transpose_b: false, name: nil)
269
286
  input_a, input_b = apply_data_type_coercion(input_a, input_b)
270
287
  _op(:mat_mul, input_a, input_b, transpose_a: transpose_a, transpose_b: transpose_b, name: name)
@@ -277,12 +294,12 @@ module TensorStream
277
294
  #
278
295
  # This operation supports broadcasting
279
296
  #
280
- # Params:
281
- # +input_a+:: tensor X (of type NUMERIC_TYPES)
282
- # +input_b+:: tensor Y (of type NUMERIC_TYPES)
297
+ # @param input_a tensor X (of type NUMERIC_TYPES)
298
+ # @param input_b tensor Y (of type NUMERIC_TYPES)
283
299
  #
284
300
  # Options:
285
- # +:name+:: Optional name
301
+ # @option name Optional name
302
+ # @return Tensor
286
303
  def max(input_a, input_b, name: nil)
287
304
  check_allowed_types(input_a, TensorStream::Ops::NUMERIC_TYPES)
288
305
  check_allowed_types(input_b, TensorStream::Ops::NUMERIC_TYPES)
@@ -296,12 +313,12 @@ module TensorStream
296
313
  #
297
314
  # This operation supports broadcasting
298
315
  #
299
- # Params:
300
- # +input_a+:: tensor X (of type NUMERIC_TYPES)
301
- # +input_b+:: tensor Y (of type NUMERIC_TYPES)
316
+ # @param input_a tensor X (of type NUMERIC_TYPES)
317
+ # @param input_b tensor Y (of type NUMERIC_TYPES)
302
318
  #
303
319
  # Options:
304
- # +:name+:: Optional name
320
+ # @option name Optional name
321
+ # @return Tensor
305
322
  def min(input_a, input_b, name: nil)
306
323
  check_allowed_types(input_a, TensorStream::Ops::NUMERIC_TYPES)
307
324
  check_allowed_types(input_b, TensorStream::Ops::NUMERIC_TYPES)
@@ -315,12 +332,12 @@ module TensorStream
315
332
  #
316
333
  # This operation supports broadcasting
317
334
  #
318
- # Params:
319
- # +input_a+:: tensor X
320
- # +input_b+:: tensor Y
335
+ # @param input_a tensor X
336
+ # @param input_b tensor Y
321
337
  #
322
338
  # Options:
323
- # +:name+:: Optional name
339
+ # @option name Optional name
340
+ # @return Tensor
324
341
  def mod(input_a, input_b, name: nil)
325
342
  input_a, input_b = apply_data_type_coercion(input_a, input_b)
326
343
  _op(:mod, input_a, input_b, name: name)
@@ -332,12 +349,12 @@ module TensorStream
332
349
  #
333
350
  # This operation supports broadcasting
334
351
  #
335
- # Params:
336
- # +input_a+:: tensor X
337
- # +input_b+:: tensor Y
352
+ # @param input_a tensor X
353
+ # @param input_b tensor Y
338
354
  #
339
355
  # Options:
340
- # +:name+:: Optional name
356
+ # @option name Optional name
357
+ # @return Tensor
341
358
  def mul(input_a, input_b, name: nil)
342
359
  input_a, input_b = apply_data_type_coercion(input_a, input_b)
343
360
  _op(:mul, input_a, input_b, name: name)
@@ -348,16 +365,33 @@ module TensorStream
348
365
  # Computes numerical negative value element-wise.
349
366
  #
350
367
  #
351
- # Params:
352
- # +input+:: tensor X
368
+ # @param input tensor X
353
369
  #
354
370
  # Options:
355
- # +:name+:: Optional name
371
+ # @option name Optional name
372
+ # @return Tensor
356
373
  def negate(input, name: nil)
357
374
  _op(:negate, input, name: name)
358
375
  end
359
376
 
360
377
 
378
+ ##
379
+ # Returns the truth value of (x != y) element-wise.
380
+ #
381
+ # This operation supports broadcasting
382
+ #
383
+ # @param input_a tensor X
384
+ # @param input_b tensor Y
385
+ #
386
+ # Options:
387
+ # @option name Optional name
388
+ # @return Tensor
389
+ def not_equal(input_a, input_b, name: nil)
390
+ input_a, input_b = apply_data_type_coercion(input_a, input_b)
391
+ _op(:not_equal, input_a, input_b, name: name)
392
+ end
393
+
394
+
361
395
  ##
362
396
  # Creates a tensor with all elements set to 1.
363
397
  # Given a single tensor (tensor), this operation returns a
@@ -365,12 +399,12 @@ module TensorStream
365
399
  # Optionally, you can specify a new type (dtype) for the returned tensor.
366
400
  #
367
401
  #
368
- # Params:
369
- # +input+:: A tensor
402
+ # @param input A tensor
370
403
  #
371
404
  # Options:
372
- # +:dtype+:: Optional new data type to cast into
373
- # +:name+:: Optional name
405
+ # @option dtype Optional new data type to cast into
406
+ # @option name Optional name
407
+ # @return Tensor
374
408
  def ones_like(input, dtype: nil, name: nil)
375
409
  _op(:ones_like, input, data_type: dtype, name: name)
376
410
  end
@@ -381,12 +415,12 @@ module TensorStream
381
415
  #
382
416
  # This operation supports broadcasting
383
417
  #
384
- # Params:
385
- # +input_a+:: tensor X
386
- # +input_b+:: tensor Y
418
+ # @param input_a tensor X
419
+ # @param input_b tensor Y
387
420
  #
388
421
  # Options:
389
- # +:name+:: Optional name
422
+ # @option name Optional name
423
+ # @return Tensor
390
424
  def pow(input_a, input_b, name: nil)
391
425
  input_a, input_b = apply_data_type_coercion(input_a, input_b)
392
426
  _op(:pow, input_a, input_b, name: name)
@@ -401,13 +435,13 @@ module TensorStream
401
435
  # If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.
402
436
  #
403
437
  #
404
- # Params:
405
- # +input_a+:: tensor X
406
- # +axis+:: tensor X (of type INTEGER_TYPES)
438
+ # @param input_a tensor X
439
+ # @param axis The dimensions to reduce (of type INTEGER_TYPES)
407
440
  #
408
441
  # Options:
409
- # +:name+:: Optional name
410
- # +:keepdims+:: If true, retains reduced dimensions with length 1. default (false)
442
+ # @option name Optional name
443
+ # @option keepdims If true, retains reduced dimensions with length 1. default (false)
444
+ # @return Tensor
411
445
  def prod(input_a, axis = nil, name: nil, keepdims: false)
412
446
  check_allowed_types(axis, TensorStream::Ops::INTEGER_TYPES)
413
447
  input_a = TensorStream.convert_to_tensor(input_a)
@@ -422,15 +456,15 @@ module TensorStream
422
456
  # Outputs random values from a uniform distribution.
423
457
  #
424
458
  #
425
- # Params:
426
- # +shape+:: A 1-D integer Tensor or array. The shape of the output tensor.
459
+ # @param shape A 1-D integer Tensor or array. The shape of the output tensor.
427
460
  #
428
461
  # Options:
429
- # +:name+:: Optional name
430
- # +:dtype+:: The type of the output: float16, float32, float64, int32, or int64 default (:float32)
431
- # +:minval+:: A 0-D Tensor or ruby value of type dtype. The lower bound on the range of random values to generate. Defaults to 0. default (0)
432
- # +:maxval+:: A 0-D Tensor or ruby value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point. default (1)
433
- # +:seed+:: A ruby integer. Used to create a random seed for the distribution. See set_random_seed for behavior.
462
+ # @option name Optional name
463
+ # @option dtype The type of the output: float16, float32, float64, int32, or int64 default (:float32)
464
+ # @option minval A 0-D Tensor or ruby value of type dtype. The lower bound on the range of random values to generate. Defaults to 0. default (0)
465
+ # @option maxval A 0-D Tensor or ruby value of type dtype. The upper bound on the range of random values to generate. Defaults to 1 if dtype is floating point. default (1)
466
+ # @option seed A ruby integer. Used to create a random seed for the distribution. See set_random_seed for behavior.
467
+ # @return Tensor
434
468
  def random_uniform(shape, name: nil, dtype: :float32, minval: 0, maxval: 1, seed: nil)
435
469
  _op(:random_uniform, shape, name: name, dtype: dtype, minval: minval, maxval: maxval, seed: seed)
436
470
  end
@@ -441,15 +475,15 @@ module TensorStream
441
475
  # Creates a sequence of numbers that begins at start and extends by increments of delta up to but not including limit.
442
476
  #
443
477
  #
444
- # Params:
445
- # +start+:: Acts as first entry in the range if limit is not nil; otherwise, acts as range limit and first entry defaults to 0.
446
- # +limit+:: Upper limit of sequence, exclusive. If nil, defaults to the value of start while the first entry of the range defaults to 0.
447
- # +delta+:: Number that increments start. Defaults to 1.
478
+ # @param start Acts as first entry in the range if limit is not nil; otherwise, acts as range limit and first entry defaults to 0.
479
+ # @param limit Upper limit of sequence, exclusive. If nil, defaults to the value of start while the first entry of the range defaults to 0.
480
+ # @param delta Number that increments start. Defaults to 1.
448
481
  #
449
482
  # Options:
450
- # +:name+:: A name for the operation. Defaults to "range". default ("range")
451
- # +:dtype+:: The type of the elements of the resulting tensor.
452
- # +:output_type+:: Output data type defaults to int32 default (:int32)
483
+ # @option name A name for the operation. Defaults to "range". default ("range")
484
+ # @option dtype The type of the elements of the resulting tensor.
485
+ # @option output_type Output data type defaults to int32 default (:int32)
486
+ # @return Tensor
453
487
  def range(start = 0, limit = 0, delta = 1, name: "range", dtype: nil, output_type: :int32)
454
488
  _op(:range, start, limit, delta, name: name, dtype: dtype, output_type: output_type)
455
489
  end
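The start/limit juggling documented for range can be illustrated with a small plain-Ruby emulation (a sketch of the documented semantics only; `ts_range` is a hypothetical name, not the library API):

```ruby
# range(start, limit, delta): numbers from start up to but not including
# limit, stepping by delta. With a single argument, that argument acts as
# the limit and the sequence starts at 0, as the docs above describe.
def ts_range(start = 0, limit = nil, delta = 1)
  start, limit = 0, start if limit.nil?  # single-argument form: ts_range(limit)
  (start...limit).step(delta).to_a
end

ts_range(5)         # => [0, 1, 2, 3, 4]
ts_range(3, 10, 2)  # => [3, 5, 7, 9]
```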
@@ -459,11 +493,11 @@ module TensorStream
459
493
  # Returns the rank of a tensor
460
494
  #
461
495
  #
462
- # Params:
463
- # +input+:: A tensor
496
+ # @param input A tensor
464
497
  #
465
498
  # Options:
466
- # +:name+:: Optional name
499
+ # @option name Optional name
500
+ # @return Tensor
467
501
  def rank(input, name: nil)
468
502
  input = convert_to_tensor(input)
469
503
  return cons(input.shape.ndims) if input.shape.known?
@@ -476,12 +510,12 @@ module TensorStream
476
510
  # Given tensor, this operation returns a tensor that has the same values as tensor with shape shape.
477
511
  #
478
512
  #
479
- # Params:
480
- # +input+:: A tensor
481
- # +shape+:: A new tensor shape
513
+ # @param input A tensor
514
+ # @param shape A new tensor shape
482
515
  #
483
516
  # Options:
484
- # +:name+:: Optional name
517
+ # @option name Optional name
518
+ # @return Tensor
485
519
  def reshape(input, shape, name: nil)
486
520
  _op(:reshape, input, shape, name: name)
487
521
  end
@@ -491,11 +525,11 @@ module TensorStream
491
525
  # Rounds the values of a tensor to the nearest integer, element-wise
492
526
  #
493
527
  #
494
- # Params:
495
- # +input_a+:: tensor X (of type FLOATING_POINT_TYPES)
528
+ # @param input_a tensor X (of type FLOATING_POINT_TYPES)
496
529
  #
497
530
  # Options:
498
- # +:name+:: Optional name
531
+ # @option name Optional name
532
+ # @return Tensor
499
533
  def round(input_a, name: nil)
500
534
  check_allowed_types(input_a, TensorStream::Ops::FLOATING_POINT_TYPES)
501
535
  _op(:round, input_a, name: name)
@@ -506,12 +540,12 @@ module TensorStream
506
540
  # This operation returns a 1-D integer tensor representing the shape of input
507
541
  #
508
542
  #
509
- # Params:
510
- # +input+:: A tensor
543
+ # @param input A tensor
511
544
  #
512
545
  # Options:
513
- # +:name+:: Optional name
514
- # +:out_type+:: Optional output type default (:int32)
546
+ # @option name Optional name
547
+ # @option out_type Optional output type default (:int32)
548
+ # @return Tensor
515
549
  def shape(input, name: nil, out_type: :int32)
516
550
  return constant(shape_eval(input, out_type), dtype: out_type, name: "Shape/#{name}") if input.is_a?(Array) && !input[0].is_a?(Tensor)
517
551
  return constant(input.shape.shape, dtype: out_type, name: "Shape/#{input.name}_c") if shape_full_specified(input)
@@ -523,11 +557,11 @@ module TensorStream
523
557
  # Computes sigmoid of x element-wise.
524
558
  #
525
559
  #
526
- # Params:
527
- # +input_a+:: tensor X (of type FLOATING_POINT_TYPES)
560
+ # @param input_a tensor X (of type FLOATING_POINT_TYPES)
528
561
  #
529
562
  # Options:
530
- # +:name+:: Optional name
563
+ # @option name Optional name
564
+ # @return Tensor
531
565
  def sigmoid(input_a, name: nil)
532
566
  check_allowed_types(input_a, TensorStream::Ops::FLOATING_POINT_TYPES)
533
567
  _op(:sigmoid, input_a, name: name)
@@ -540,11 +574,11 @@ module TensorStream
540
574
  # Zero is returned for NaN inputs.
541
575
  #
542
576
  #
543
- # Params:
544
- # +input_a+:: tensor X
577
+ # @param input_a tensor X
545
578
  #
546
579
  # Options:
547
- # +:name+:: Optional name
580
+ # @option name Optional name
581
+ # @return Tensor
548
582
  def sign(input_a, name: nil)
549
583
  _op(:sign, input_a, name: name)
550
584
  end
@@ -554,11 +588,11 @@ module TensorStream
554
588
  # Computes sin of input element-wise.
555
589
  #
556
590
  #
557
- # Params:
558
- # +input_a+:: tensor X (of type FLOATING_POINT_TYPES)
591
+ # @param input_a tensor X (of type FLOATING_POINT_TYPES)
559
592
  #
560
593
  # Options:
561
- # +:name+:: Optional name
594
+ # @option name Optional name
595
+ # @return Tensor
562
596
  def sin(input_a, name: nil)
563
597
  check_allowed_types(input_a, TensorStream::Ops::FLOATING_POINT_TYPES)
564
598
  _op(:sin, input_a, name: name)
@@ -570,12 +604,12 @@ module TensorStream
570
604
  # Returns a 0-D Tensor representing the number of elements in input of type out_type. Defaults to :int32.
571
605
  #
572
606
  #
573
- # Params:
574
- # +input+:: A tensor
607
+ # @param input A tensor
575
608
  #
576
609
  # Options:
577
- # +:name+:: Optional name
578
- # +:out_type+:: Optional output type default (:int32)
610
+ # @option name Optional name
611
+ # @option out_type Optional output type default (:int32)
612
+ # @return Tensor
579
613
  def size(input, name: nil, out_type: :int32)
580
614
  _op(:size, input, name: name, out_type: out_type)
581
615
  end
@@ -586,12 +620,12 @@ module TensorStream
586
620
  #
587
621
  # This operation supports broadcasting
588
622
  #
589
- # Params:
590
- # +input_a+:: tensor X
591
- # +input_b+:: tensor Y
623
+ # @param input_a tensor X
624
+ # @param input_b tensor Y
592
625
  #
593
626
  # Options:
594
- # +:name+:: Optional name
627
+ # @option name Optional name
628
+ # @return Tensor
595
629
  def sub(input_a, input_b, name: nil)
596
630
  input_a, input_b = apply_data_type_coercion(input_a, input_b)
597
631
  _op(:sub, input_a, input_b, name: name)
@@ -607,13 +641,13 @@ module TensorStream
607
641
  # If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.
608
642
  #
609
643
  #
610
- # Params:
611
- # +input_a+:: tensor X
612
- # +axis+:: tensor X (of type INTEGER_TYPES)
644
+ # @param input_a tensor X
645
+ # @param axis The dimensions to reduce (of type INTEGER_TYPES)
613
646
  #
614
647
  # Options:
615
- # +:name+:: Optional name
616
- # +:keepdims+:: If true, retains reduced dimensions with length 1. default (false)
648
+ # @option name Optional name
649
+ # @option keepdims If true, retains reduced dimensions with length 1. default (false)
650
+ # @return Tensor
617
651
  def sum(input_a, axis = nil, name: nil, keepdims: false)
618
652
  check_allowed_types(axis, TensorStream::Ops::INTEGER_TYPES)
619
653
  input_a = TensorStream.convert_to_tensor(input_a)
@@ -628,11 +662,11 @@ module TensorStream
628
662
  # Computes tan of input element-wise.
629
663
  #
630
664
  #
631
- # Params:
632
- # +input_a+:: tensor X (of type FLOATING_POINT_TYPES)
665
+ # @param input_a tensor X (of type FLOATING_POINT_TYPES)
633
666
  #
634
667
  # Options:
635
- # +:name+:: Optional name
668
+ # @option name Optional name
669
+ # @return Tensor
636
670
  def tan(input_a, name: nil)
637
671
  check_allowed_types(input_a, TensorStream::Ops::FLOATING_POINT_TYPES)
638
672
  _op(:tan, input_a, name: name)
@@ -643,11 +677,11 @@ module TensorStream
643
677
  # Computes tanh of input element-wise.
644
678
  #
645
679
  #
646
- # Params:
647
- # +input_a+:: tensor X (of type FLOATING_POINT_TYPES)
680
+ # @param input_a tensor X (of type FLOATING_POINT_TYPES)
648
681
  #
649
682
  # Options:
650
- # +:name+:: Optional name
683
+ # @option name Optional name
684
+ # @return Tensor
651
685
  def tanh(input_a, name: nil)
652
686
  check_allowed_types(input_a, TensorStream::Ops::FLOATING_POINT_TYPES)
653
687
  _op(:tanh, input_a, name: name)
@@ -661,12 +695,12 @@ module TensorStream
  # and the values of input are replicated multiples[i] times along the 'i'th dimension. For example, tiling [a b c d] by [2] produces [a b c d a b c d].
  #
  #
- # Params:
- # +input+:: A tensor
- # +multiples+:: Must be one of the following types: int32, int64. 1-D. Length must be the same as the number of dimensions in input
+ # @param input A tensor
+ # @param multiples Must be one of the following types: int32, int64. 1-D. Length must be the same as the number of dimensions in input
  #
  # Options:
- # +:name+:: Optional name
+ # @option name Optional name
+ # @return Tensor
  def tile(input, multiples, name: nil)
  _op(:tile, input, multiples, name: name)
  end
@@ -676,12 +710,12 @@ module TensorStream
  # Creates a tensor with all elements set to zero
  #
  #
- # Params:
- # +shape+:: A 1-D integer Tensor or ruby array. The shape of the output tensor.
+ # @param shape A 1-D integer Tensor or ruby array. The shape of the output tensor.
  #
  # Options:
- # +:dtype+:: Optional name default (:float32)
- # +:name+:: Optional name
+ # @option dtype Optional name default (:float32)
+ # @option name Optional name
+ # @return Tensor
  def zeros(shape, dtype: :float32, name: nil)
  _op(:zeros, shape, dtype: dtype, name: name)
  end
@@ -9,12 +9,12 @@ module TensorStream
  <%end%> #
  #<% if op.supports_broadcasting? %> This operation supports broadcasting
  #<% end %>
- # Params:
- <% op.parameters.each do |param| %> # +<%= param[:name] %>+:: <%= param[:description]%><%if param[:validate]%> (of type <%= param[:validate] %>)<%end%>
+ <% op.parameters.each do |param| %> # @param <%= param[:name] %> <%= param[:description]%><%if param[:validate]%> (of type <%= param[:validate] %>)<%end%>
  <% end %> #
  # Options:
- <% op.options.each do |k, v| %> # +:<%= k %>+:: <%= v[:description]%><% if v[:default_value] != :nil %> default (<%= v[:default_value] %>)<%end%>
- <%end%> def <%= op.operation.to_s %>(<%= (op.expand_params(true) + op.expand_options(true)).join(', ') %>)
+ <% op.options.each do |k, v| %> # @option <%= k %> <%= v[:description]%><% if v[:default_value] != :nil %> default (<%= v[:default_value] %>)<%end%>
+ <%end%> # @return Tensor
+ def <%= op.operation.to_s %>(<%= (op.expand_params(true) + op.expand_options(true)).join(', ') %>)
  <%= op.generate_body %>
  end
  <% op.aliases.each do |a|%>
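The template change above swaps RDoc-style `+name+::` markup for YARD `@param`/`@option`/`@return` tags in the generated op documentation. A minimal sketch of rendering such a `@param` line with ERB, using an illustrative parameter list shaped like the template's `op.parameters` entries (the hash keys here are assumptions, not the gem's real metadata):

```ruby
require "erb"

# Hypothetical parameter metadata, shaped like the template's param hashes.
params = [
  { name: "input_a", description: "tensor X", validate: "FLOATING_POINT_TYPES" },
  { name: "input_b", description: "tensor Y", validate: nil },
]

# Render one "# @param" doc line per parameter, mirroring the new YARD style.
template = ERB.new(<<~TPL)
  <% params.each do |param| %># @param <%= param[:name] %> <%= param[:description] %><% if param[:validate] %> (of type <%= param[:validate] %>)<% end %>
  <% end %>
TPL

doc = template.result(binding)
puts doc
```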
@@ -137,6 +137,19 @@ module TensorStream
  def conv2d(input, filter, strides, padding, name: nil)
  _op(:conv2d, input, filter, strides: strides, padding: padding, name: name)
  end
+
+ ##
+ # Adds bias to value.
+ #
+ # This is a narrow version of tf add where the bias is restricted to 1-D only
+ def bias_add(value, bias, data_format: nil, name: nil)
+ value = TensorStream.convert_to_tensor(value, name: "input")
+ bias = TensorStream.convert_to_tensor(bias, dtype: value.dtype, name: "bias")
+
+ raise TensorStreamError, "value must be at least rank 2" if value.shape.known? && value.shape.ndims < 2
+
+ _op(:bias_add, value, bias, data_format: data_format, name: name)
+ end
  end
  end
 
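The new `bias_add` validates that `value` has rank of at least 2 before emitting the op. A standalone sketch of that guard, using nested Ruby arrays in place of Tensor objects (TensorStream itself is not needed here; `rank_of` is an illustrative helper, not a gem API):

```ruby
# Hypothetical stand-in for the rank guard in bias_add:
# infer the rank of a nested-array "tensor" and reject anything below rank 2.
def rank_of(value)
  value.is_a?(Array) ? 1 + rank_of(value.first) : 0
end

def check_bias_add_rank!(value)
  raise ArgumentError, "value must be at least rank 2" if rank_of(value) < 2
  value
end

check_bias_add_rank!([[1.0, 2.0], [3.0, 4.0]])  # rank 2: accepted
begin
  check_bias_add_rank!([1.0, 2.0])              # rank 1: rejected
rescue ArgumentError => e
  puts e.message
end
```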
@@ -2,7 +2,8 @@ class TensorStream::OpMaker
  attr_reader :operation, :description, :parameters,
  :options, :gradient, :check_types,
  :supports_broadcast, :data_type_coercion,
- :aliases, :custom, :infer_type_proc, :exclude
+ :aliases, :custom, :infer_type_proc, :exclude,
+ :data_type_block
 
  def initialize(op)
  @operation = op
@@ -58,6 +59,22 @@ class TensorStream::OpMaker
  context_caller.instance_exec(tensor, &@ops[tensor.operation].infer_type_proc)
  end
 
+ def self.infer_data_type(context_caller, tensor, passed_data_type)
+ return passed_data_type if passed_data_type
+
+ if @ops[tensor.operation] && @ops[tensor.operation].data_type_block
+ context_caller.instance_exec(tensor, &@ops[tensor.operation].data_type_block)
+ else
+ if tensor.inputs[0]
+ tensor.inputs[0].data_type
+ elsif tensor.inputs[1]
+ tensor.inputs[1].data_type
+ else
+ :unknown
+ end
+ end
+ end
+
  def self.each_op(&block)
  @ops.values.sort_by { |op| op.operation }.reject(&:exclude).each do |op|
  block.call(op)
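The new `OpMaker.infer_data_type` centralizes output-dtype resolution with a clear precedence: an explicitly passed dtype wins, then an op-specific `data_type_block` (run via `instance_exec`), then the first available input's dtype, falling back to `:unknown`. A plain-Ruby sketch of that precedence (the registry and `Tensor` struct here are assumptions for illustration, not the gem's real classes):

```ruby
# Illustrative registry: op name -> optional block computing the output dtype.
DATA_TYPE_BLOCKS = {
  greater: proc { |_tensor| :boolean }  # comparison ops always yield booleans
}

Tensor = Struct.new(:operation, :inputs, :data_type)

def infer_data_type(tensor, passed_data_type = nil)
  return passed_data_type if passed_data_type           # explicit dtype wins

  block = DATA_TYPE_BLOCKS[tensor.operation]
  return tensor.instance_exec(tensor, &block) if block  # op-specific rule

  first_input = tensor.inputs.compact.first             # fall back to inputs
  first_input ? first_input.data_type : :unknown
end

x = Tensor.new(:const, [], :float32)
puts infer_data_type(Tensor.new(:greater, [x, x], nil))  # => boolean
puts infer_data_type(Tensor.new(:add, [x, x], nil))      # => float32
puts infer_data_type(Tensor.new(:mystery, [], nil))      # => unknown
```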
@@ -122,6 +139,10 @@ class TensorStream::OpMaker
  @infer_type_proc = block
  end
 
+ def define_data_type(&block)
+ @data_type_block = block
+ end
+
  def expand_params(print_defaults)
  @parameters.map { |param|
  print_defaults && param[:default_value] ? "#{param[:name]} = #{default_with_nil(param[:default_value])}" : "#{param[:name]}"
@@ -96,7 +96,7 @@ module TensorStream
  options[:data_type]
  when :fill
  @inputs[1].data_type
- when :greater, :less, :equal, :not_equal, :greater_equal, :less_equal, :logical_and
+ when :logical_and
  :boolean
  when :shape, :rank, :shape_n
  options[:out_type] || :int32
@@ -119,15 +119,7 @@ module TensorStream
  @inputs[0].data_type
  end
  else
- return passed_data_type if passed_data_type
-
- if @inputs[0]
- @inputs[0].data_type
- elsif @inputs[1]
- @inputs[1].data_type
- else
- :unknown
- end
+ OpMaker.infer_data_type(self, self, passed_data_type)
  end
  end
 
@@ -163,14 +163,6 @@ module TensorStream
  _op(:ones, shape, data_type: dtype, name: name)
  end
 
- ##
- # Returns the truth value of (x < y) element-wise.
- # This operation supports broadcasting
- def less(input_a, input_b, name: nil)
- check_data_types(input_a, input_b)
- _op(:less, input_a, input_b, name: name)
- end
-
  ##
  # Returns the truth value of x AND y element-wise.
  def logical_and(input_a, input_b, name: nil)
@@ -0,0 +1,16 @@
+ TensorStream::OpMaker.define_operation :bias_add do |op|
+ op.what_it_does "Adds bias to value."
+
+ op.parameter :value, "A Tensor", :nil, validate: 'NUMERIC_TYPES'
+ op.parameter :bias, "A 1-D tensor", :nil, validate: 'NUMERIC_TYPES'
+
+ op.supports_broadcasting!
+ op.exclude!
+
+ op.option :name, "Optional name", :nil
+ op.option :data_format, "A string. 'NHWC' and 'NCHW' are supported.", :nil
+
+ op.define_gradient do |grad, node, _params|
+ [grad, _op(:bias_add_grad, grad, data_format: node.options[:data_format])]
+ end
+ end
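The gradient registered above passes `grad` through unchanged for `value` and emits a `bias_add_grad` op for `bias`; for NHWC-style layouts the bias gradient is just the incoming gradient summed over every axis except the last (channel) axis. A numeric sketch of that forward/backward pair on plain rank-2 arrays (illustrative helpers, not the gem's backend code):

```ruby
# bias_add forward: add a 1-D bias along the last axis of a rank-2 value.
def bias_add(value, bias)
  value.map { |row| row.each_with_index.map { |v, i| v + bias[i] } }
end

# Bias gradient for NHWC-style layouts: sum the incoming gradient over
# every axis except the channel (last) axis.
def bias_add_grad(grad)
  grad.transpose.map(&:sum)
end

grad = [[1.0, 2.0], [3.0, 4.0]]
p bias_add([[1.0, 1.0], [1.0, 1.0]], [0.5, -0.5])  # => [[1.5, 0.5], [1.5, 0.5]]
p bias_add_grad(grad)                               # => [4.0, 6.0]
```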
@@ -12,4 +12,8 @@ TensorStream::OpMaker.define_operation :equal do |op|
  op.define_gradient do |grad, node, params|
  _min_or_max_grad(node.inputs, grad, ->(a, b) { ts.equal(a, b) })
  end
+
+ op.define_data_type do
+ :boolean
+ end
  end
@@ -8,4 +8,8 @@ TensorStream::OpMaker.define_operation :greater do |op|
  op.supports_broadcasting!
 
  op.option :name, "Optional name", :nil
+
+ op.define_data_type do
+ :boolean
+ end
  end
@@ -8,4 +8,8 @@ TensorStream::OpMaker.define_operation :greater_equal do |op|
  op.supports_broadcasting!
 
  op.option :name, "Optional name", :nil
+
+ op.define_data_type do
+ :boolean
+ end
  end
@@ -0,0 +1,19 @@
+ TensorStream::OpMaker.define_operation :less do |op|
+ op.what_it_does "Returns the truth value of (x < y) element-wise."
+
+ op.parameter :input_a, "tensor X"
+ op.parameter :input_b, "tensor Y"
+
+ op.apply_data_type_coercion!
+ op.supports_broadcasting!
+
+ op.option :name, "Optional name", :nil
+
+ op.define_gradient do |grad, node, _params|
+ _min_or_max_grad(node.inputs, grad, ->(a, b) { ts.less(a, b) })
+ end
+
+ op.define_data_type do
+ :boolean
+ end
+ end
@@ -12,4 +12,8 @@ TensorStream::OpMaker.define_operation :less_equal do |op|
  op.define_gradient do |grad, node, params|
  _min_or_max_grad(node.inputs, grad, ->(a, b) { ts.greater_equal(a, b) })
  end
+
+ op.define_data_type do
+ :boolean
+ end
  end
@@ -0,0 +1,19 @@
+ TensorStream::OpMaker.define_operation :not_equal do |op|
+ op.what_it_does "Returns the truth value of (x != y) element-wise."
+
+ op.parameter :input_a, "tensor X"
+ op.parameter :input_b, "tensor Y"
+
+ op.apply_data_type_coercion!
+ op.supports_broadcasting!
+
+ op.option :name, "Optional name", :nil
+
+ op.define_gradient do |grad, node, params|
+ _min_or_max_grad(node.inputs, grad, ->(a, b) { ts.not_equal(a, b) })
+ end
+
+ op.define_data_type do
+ :boolean
+ end
+ end
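Files such as less.rb and not_equal.rb are declarative op definitions consumed by OpMaker. A minimal sketch of how such a `define_operation` DSL can capture parameters, options, and blocks for later code generation (a simplified toy, assuming far less state than the real OpMaker):

```ruby
# A toy OpMaker-style DSL: define_operation yields a builder object that
# records parameters, options, and blocks for later code generation.
class OpDef
  attr_reader :operation, :parameters, :options, :data_type_block

  def initialize(operation)
    @operation = operation
    @parameters = []
    @options = {}
  end

  def parameter(name, description)
    @parameters << { name: name, description: description }
  end

  def option(name, description, default)
    @options[name] = { description: description, default: default }
  end

  def define_data_type(&block)
    @data_type_block = block
  end
end

OPS = {}

def define_operation(name)
  op = OpDef.new(name)
  yield op
  OPS[name] = op
end

define_operation :not_equal do |op|
  op.parameter :input_a, "tensor X"
  op.parameter :input_b, "tensor Y"
  op.option :name, "Optional name", nil
  op.define_data_type { :boolean }
end

puts OPS[:not_equal].data_type_block.call  # => boolean
```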
@@ -1,5 +1,5 @@
  module TensorStream
- VERSION = "1.0.5".freeze
+ VERSION = "1.0.6".freeze
 
  def self.version
  VERSION
@@ -0,0 +1,22 @@
+ # A ruby port of the example code discussed by Martin Gorner in
+ # "TensorFlow and Deep Learning without a PhD, Part 1 (Google Cloud Next '17)"
+ #
+ # https://www.youtube.com/watch?v=u4alGiomYP4
+ #
+ # Requirements:
+ # mnist-learn gem
+ # opencl_ruby_ffi gem
+ require "bundler/setup"
+ require "tensor_stream"
+ require "mnist-learn"
+
+ # Enable OpenCL hardware accelerated computation; not using OpenCL can be very slow
+ # gem install tensor_stream-opencl
+ require 'tensor_stream/opencl'
+
+ tf = TensorStream
+
+ # Import MNIST data
+ puts "downloading mnist data"
+ mnist = Mnist.read_data_sets("/tmp/data", one_hot: true)
+ puts "downloading finished"
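The sample reads MNIST with `one_hot: true`, which turns each digit label 0-9 into a length-10 indicator vector. A quick sketch of one-hot encoding in plain Ruby, independent of the mnist-learn gem (`one_hot` here is an illustrative helper, not that gem's API):

```ruby
# One-hot encode integer labels into indicator vectors of length num_classes.
def one_hot(labels, num_classes)
  labels.map do |label|
    Array.new(num_classes) { |i| i == label ? 1.0 : 0.0 }
  end
end

p one_hot([3, 0], 5)
# => [[0.0, 0.0, 0.0, 1.0, 0.0], [1.0, 0.0, 0.0, 0.0, 0.0]]
```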
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: tensor_stream
  version: !ruby/object:Gem::Version
- version: 1.0.5
+ version: 1.0.6
  platform: ruby
  authors:
  - Joseph Emmanuel Dayo
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2019-03-20 00:00:00.000000000 Z
+ date: 2019-03-23 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: bundler
@@ -317,6 +317,7 @@ files:
  - lib/tensor_stream/ops/add.rb
  - lib/tensor_stream/ops/argmax.rb
  - lib/tensor_stream/ops/argmin.rb
+ - lib/tensor_stream/ops/bias_add.rb
  - lib/tensor_stream/ops/case.rb
  - lib/tensor_stream/ops/cast.rb
  - lib/tensor_stream/ops/ceil.rb
@@ -330,6 +331,7 @@ files:
  - lib/tensor_stream/ops/floor_div.rb
  - lib/tensor_stream/ops/greater.rb
  - lib/tensor_stream/ops/greater_equal.rb
+ - lib/tensor_stream/ops/less.rb
  - lib/tensor_stream/ops/less_equal.rb
  - lib/tensor_stream/ops/log.rb
  - lib/tensor_stream/ops/mat_mul.rb
@@ -338,6 +340,7 @@ files:
  - lib/tensor_stream/ops/mod.rb
  - lib/tensor_stream/ops/mul.rb
  - lib/tensor_stream/ops/negate.rb
+ - lib/tensor_stream/ops/not_equal.rb
  - lib/tensor_stream/ops/ones_like.rb
  - lib/tensor_stream/ops/pow.rb
  - lib/tensor_stream/ops/prod.rb
@@ -384,6 +387,7 @@ files:
  - samples/datasets/iris.data
  - samples/jupyter_notebooks/linear_regression.ipynb
  - samples/neural_networks/iris.rb
+ - samples/neural_networks/lstm.rb
  - samples/neural_networks/mnist_data.rb
  - samples/neural_networks/raw_neural_net_sample.rb
  - samples/neural_networks/rnn.rb