telekinesis 2.0.1-java → 3.0.0-java
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +27 -24
- data/ext/pom.xml +1 -1
- data/ext/src/main/java/com/kickstarter/jruby/Telekinesis.java +35 -23
- data/lib/telekinesis/consumer/base_processor.rb +3 -3
- data/lib/telekinesis/consumer/block.rb +5 -3
- data/lib/telekinesis/consumer/{distributed_consumer.rb → kcl.rb} +31 -25
- data/lib/telekinesis/consumer.rb +1 -1
- data/lib/telekinesis/{telekinesis-2.0.1.jar → telekinesis-3.0.0.jar} +0 -0
- data/lib/telekinesis/version.rb +1 -1
- data/telekinesis.gemspec +0 -1
- metadata +4 -19
- data/lib/telekinesis/aws/ruby_client_adapter.rb +0 -40
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA1:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: b4fa22c623dd098fff1cefccc7b48d9c543858b3
+  data.tar.gz: b3063eaed5976b9744ecc6eb13fa0fa05c42a184
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 58272a8819dae59636c4c0827da20d244675d2a0404795d2cb33a724643cd31c195c0a692623545483563d7a920257ea8536e0496e8f8b0d938c4fba824c0752
+  data.tar.gz: 07ee614c9d050cd3b37e6d0a4d038dc3efea8e86704d4a2516b9c91ceb0c7efcbfc62d700f84b8f9234385fbd9152411253f0b286dd4c7f0c7470b4bd6627542
data/README.md
CHANGED
@@ -7,7 +7,7 @@
 - [SyncProducer](#syncproducer)
 - [AsyncProducer](#asyncproducer)
 - [Consumers](#consumers)
-- [
+- [KCL](#kcl)
 - [Client State](#client-state)
 - [Errors while processing records](#errors-while-processing-records)
 - [Checkpoints and `INITIAL_POSITION_IN_STREAM`](#checkpoints-and-initial_position_in_stream)
@@ -167,15 +167,15 @@ producer = Telekinesis::Producer::AsyncProducer.create(
 
 ## Consumers
 
-###
+### KCL
 
-
-(also called the KCL)](http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-app.html#kinesis-record-processor-overview-kcl).
+`Telekinesis::Consumer::KCL` is a wrapper around Amazon's [Kinesis Client
+Library (also called the KCL)](http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-app.html#kinesis-record-processor-overview-kcl).
 
-Each
-
-Consumers identify themself uniquely within an
-`worker_id`.
+Each KCL instance is part of a group of consumers that make up an
+_application_. An application can be running on any number of hosts in any
+number of processes. Consumers identify themself uniquely within an
+application by specifying a `worker_id`.
 
 All of the consumers within an application attempt to distribute work evenly
 between themselves by coordinating through a DynamoDB table. This coordination
@@ -183,12 +183,13 @@ ensures that a single consumer processes each shard, and that if one consumer
 fails for any reason, another consumer can pick up from the point at which it
 last checkpointed.
 
-This is all part of the
+This is all part of the official AWS library! Telekinesis just makes it easier
+to use from JRuby.
 
-Each
+Each client has to know how to process all the data it's
 retreiving from Kinesis. That's done by creating a [record
 processor](http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-implementation-app-java.html#kinesis-record-processor-implementation-interface-java)
-and telling a `
+and telling a `KCL` how to create a processor when it becomes
 responsible for a shard.
 
 We highly recommend reading the [official
@@ -213,13 +214,15 @@ Defining and creating a simple processor might look like:
 require 'telekinesis'
 
 class MyProcessor
-  def init(
-    @shard_id = shard_id
+  def init(init_input)
+    @shard_id = init_input.shard_id
     $stderr.puts "Started processing #{@shard_id}"
   end
 
-  def process_records(
-    records.each
+  def process_records(process_records_input)
+    process_records_input.records.each do |r|
+      puts "key=#{r.partition_key} value=#{String.from_java_bytes(r.data.array)}"
+    end
   end
 
   def shutdown
@@ -227,7 +230,7 @@ class MyProcessor
   end
 end
 
-Telekinesis::Consumer::
+Telekinesis::Consumer::KCL.new(stream: 'some-events', app: 'example') do
   MyProcessor.new
 end
 ```
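One detail worth calling out from the application and `worker_id` description earlier in the README: the default `worker_id` is the hostname, so two consumers for the same app on one machine will collide. The sketch below is only an illustration of passing an explicit id; the hostname-plus-pid format is an assumption, not something the gem prescribes.

```ruby
require 'telekinesis'
require 'socket'

Telekinesis::Consumer::KCL.new(
  stream: 'some-events',
  app: 'example',
  # Hypothetical unique id: hostname plus process id, so several consumers
  # for the same app can share a host without clashing.
  worker_id: "#{Socket.gethostname}-#{Process.pid}"
) do
  MyProcessor.new
end
```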
@@ -240,8 +243,8 @@ processor.
 ```ruby
 require 'telekinesis'
 
-Telekinesis::Consumer::
-Telekinesis::Consumer::Block.new do |records, checkpointer|
+Telekinesis::Consumer::KCL.new(stream: 'some-events', app: 'example') do
+  Telekinesis::Consumer::Block.new do |records, checkpointer, millis_behind|
   records.each {|r| puts "key=#{r.partition_key} value=#{String.from_java_bytes(r.data.array)}" }
 end
 end
@@ -290,12 +293,12 @@ used to checkpoint all records that have been passed to the processor so far
 (by just calling `checkpointer.checkpoint`) or up to a particular sequence
 number (by calling `checkpointer.checkpoint(record.sequence_number)`).
 
-While a `
-
-
-
-
+While a `KCL` consumer can be initialized with an `:initial_position_in_stream`
+option, any existing checkpoint for a shard will take precedent over that
+value. Furthermore, any existing STATE in DynamoDB will take precedent, so if
+you start a consumer with `initial_position_in_stream: 'LATEST'` and then
+restart with `initial_position_in_stream: 'TRIM_HORIZON'` you still end up
+starting from `LATEST`.
 
 ## Java client logging
 
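As a concrete illustration of the two checkpointing styles mentioned above, here is a hedged sketch of a `Block` processor that checkpoints record by record rather than all at once. The output format is made up; the accessors (`partition_key`, `sequence_number`, the three block arguments) all appear in the examples elsewhere in this README.

```ruby
Telekinesis::Consumer::Block.new do |records, checkpointer, _millis_behind|
  records.each do |r|
    puts "key=#{r.partition_key}"
    # Checkpoint only up to this record's sequence number, rather than
    # checkpointing everything handed to the processor so far.
    checkpointer.checkpoint(r.sequence_number)
  end
end
```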
data/ext/pom.xml
CHANGED
data/ext/src/main/java/com/kickstarter/jruby/Telekinesis.java
CHANGED
@@ -1,17 +1,25 @@
 package com.kickstarter.jruby;
 
+import com.amazonaws.services.cloudwatch.AmazonCloudWatch;
+import com.amazonaws.services.cloudwatch.AmazonCloudWatchClient;
+import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
+import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
+import com.amazonaws.services.kinesis.AmazonKinesis;
+import com.amazonaws.services.kinesis.AmazonKinesisClient;
+import com.amazonaws.services.kinesis.leases.exceptions.LeasingException;
+import com.amazonaws.services.kinesis.leases.impl.KinesisClientLeaseManager;
 import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer;
 import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration;
 import com.amazonaws.services.kinesis.clientlibrary.lib.worker.Worker;
-import com.amazonaws.services.kinesis.clientlibrary.types.
+import com.amazonaws.services.kinesis.clientlibrary.types.InitializationInput;
+import com.amazonaws.services.kinesis.clientlibrary.types.ProcessRecordsInput;
+import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownInput;
 import com.amazonaws.services.kinesis.model.Record;
 
-import java.util.List;
-
 /**
  * A shim that makes it possible to use the Kinesis Client Library from JRuby.
  * Without the shim, {@code initialize} method in
- * {@link com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor}
+ * {@link com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor}
 * conflicts with the special {@code initialize} method in Ruby. The shim
 * interface renames {@code initialize} to {@code init}.
 * <p />
@@ -36,67 +44,71 @@ public class Telekinesis {
      * {@link IRecordProcessorFactory}.
      */
     public static Worker newWorker(final KinesisClientLibConfiguration config, final IRecordProcessorFactory factory) {
-
+        com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory v2Factory = new com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory() {
             @Override
-            public com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor createProcessor() {
+            public com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor createProcessor() {
                 return new RecordProcessorShim(factory.createProcessor());
             }
-        }
+        };
+        return new Worker.Builder()
+                .recordProcessorFactory(v2Factory)
+                .config(config)
+                .build();
     }
 
     // ========================================================================
     /**
      * A shim that wraps a {@link IRecordProcessor} so it can get used by the KCL.
      */
-    private static class RecordProcessorShim implements com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor {
+    private static class RecordProcessorShim implements com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor {
         private final IRecordProcessor underlying;
 
         public RecordProcessorShim(final IRecordProcessor underlying) { this.underlying = underlying; }
 
         @Override
-        public void initialize(final
-            underlying.init(
+        public void initialize(final InitializationInput initializationInput) {
+            underlying.init(initializationInput);
         }
 
         @Override
-        public void processRecords(final
-            underlying.processRecords(
+        public void processRecords(final ProcessRecordsInput processRecordsInput) {
+            underlying.processRecords(processRecordsInput);
         }
 
         @Override
-        public void shutdown(final
-            underlying.shutdown(
+        public void shutdown(final ShutdownInput shutdownInput) {
+            underlying.shutdown(shutdownInput);
         }
     }
 
     /**
-     * A parallel {@link com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor}
+     * A parallel {@link com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor}
      * that avoids naming conflicts with reserved words in Ruby.
      */
     public static interface IRecordProcessor {
         /**
-         * @see com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor#initialize(
+         * @see com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor#initialize(InitializationInput)
         */
-        void init(
+        void init(InitializationInput initializationInput);
 
         /**
-         * @see com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor#processRecords(
+         * @see com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor#processRecords(ProcessRecordsInput)
         */
-        void processRecords(
+        void processRecords(ProcessRecordsInput processRecordsInput);
 
         /**
-         * @see com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessor#shutdown(
+         * @see com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessor#shutdown(ShutdownInput)
         */
-        void shutdown(
+        void shutdown(ShutdownInput shutdownInput);
     }
 
     /**
-     * A parallel {@link com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorFactory}
+     * A parallel {@link com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory}
      * for {@link IRecordProcessor}.
      */
     public static interface IRecordProcessorFactory {
         /**
-         * @see com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorFactory#createProcessor()
+         * @see com.amazonaws.services.kinesis.clientlibrary.interfaces.v2.IRecordProcessorFactory#createProcessor()
         */
         IRecordProcessor createProcessor();
     }
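From the JRuby side, the practical effect of the v2 shim above is that `init`, `process_records`, and `shutdown` each receive a single input object instead of separate arguments. A minimal sketch, using only accessors that appear elsewhere in this diff (`shard_id`, `records`, `checkpointer`, `millis_behind_latest`); the class name and log format are invented for the example:

```ruby
class LoggingProcessor
  def init(init_input)
    # InitializationInput carries the shard this processor now owns.
    @shard_id = init_input.shard_id
  end

  def process_records(input)
    # ProcessRecordsInput bundles the batch, the checkpointer, and lag info.
    input.records.each { |r| puts String.from_java_bytes(r.data.array) }
    $stderr.puts "#{@shard_id}: #{input.millis_behind_latest} ms behind"
    input.checkpointer.checkpoint
  end

  def shutdown(shutdown_input)
    # ShutdownInput is passed straight through by the Java shim.
  end
end
```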
data/lib/telekinesis/consumer/base_processor.rb
CHANGED
@@ -4,9 +4,9 @@ module Telekinesis
     # IRecordProcessor methods. Override it to implement simple IRecordProcessors
     # that don't need to do anything special on init or shutdown.
     class BaseProcessor
-      def init(
-      def process_records(
-      def shutdown(
+      def init(initialization_input); end
+      def process_records(process_records_input); end
+      def shutdown(shutdown_input); end
     end
   end
 end
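Because `BaseProcessor` now provides no-op methods with the v2-style input arguments, a subclass only overrides what it needs. A small illustrative subclass (the class name and counting logic are invented for the example; the accessors come from the rest of this diff):

```ruby
class CountingProcessor < Telekinesis::Consumer::BaseProcessor
  def init(initialization_input)
    @seen = 0
  end

  def process_records(process_records_input)
    # Count the batch, then checkpoint everything seen so far.
    @seen += process_records_input.records.size
    process_records_input.checkpointer.checkpoint
  end
end
```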
data/lib/telekinesis/consumer/block.rb
CHANGED
@@ -4,8 +4,10 @@ module Telekinesis
     # quickly define a consumer.
     #
     #   Telekinesis::Consumer::Worker.new(stream: 'my-stream', app: 'tail') do
-    #     Telekinesis::Consumer::Block.new do |records, checkpointer|
+    #     Telekinesis::Consumer::Block.new do |records, checkpointer, millis_behind_latest|
     #       records.each {|r| puts r}
+    #       $stderr.puts "#{millis_behind_latest} ms behind"
+    #       checkpointer.checkpoint
     #     end
     #   end
     class Block < BaseProcessor
@@ -14,8 +16,8 @@ module Telekinesis
         @block = block
       end
 
-      def process_records(
-        @block.call(records, checkpointer)
+      def process_records(input)
+        @block.call(input.records, input.checkpointer, input.millis_behind_latest)
       end
     end
   end
data/lib/telekinesis/consumer/{distributed_consumer.rb → kcl.rb}
CHANGED
@@ -3,29 +3,31 @@ module Telekinesis
     java_import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream
     java_import com.amazonaws.services.kinesis.clientlibrary.lib.worker.KinesisClientLibConfiguration
 
-    class
-      # Create a new consumer that consumes data from a Kinesis stream
-      #
-      #
-      #
+    class KCL
+      # Create a new consumer that consumes data from a Kinesis stream using the
+      # AWS Kinesis Client Library.
+      #
+      # The KCL uses DynamoDB to register clients as part of the an application
+      # and evenly distribute work between all of the clients registered for
+      # the same application. See the AWS Docs for more information:
       #
       # http://docs.aws.amazon.com/kinesis/latest/dev/developing-consumer-apps-with-kcl.html
       #
-      #
-      #
+      # KCLs are configured with a hash. The Kinesis `:stream` to consume from
+      # is required.
       #
-      #
-      #
-      #
-      #
-      #
-      #
+      # KCL clients operate in groups. All consumers with the same `:app` id use
+      # DynamoDB to attempt to distribute work evenly among themselves. The
+      # `:worker_id` is used to distinguish individual clients (`:worker_id`
+      # defaults to the current hostname. If you plan to run more than one KCL
+      # client in the same `:app` on the same host, make sure you set this to
+      # something unique!).
       #
-      # Any other valid KCL Worker `:options` may be passed as a hash.
+      # Any other valid KCL Worker `:options` may be passed as a nested hash.
       #
       # For example, to configure a `tail` app on `some-stream` and use the
      # default `:worker_id`, you might pass the following configuration to your
-      #
+      # KCL.
       #
       #   config = {
       #     app: 'tail',
@@ -33,12 +35,12 @@ module Telekinesis
       #     options: {initial_position_in_stream: 'TRIM_HORIZON'}
       #   }
       #
-      # To actually process the stream, a
-      #
-      #
-      # `
+      # To actually process the stream, a KCL client creates record processors.
+      # These are objects that correspond to the KCL's RecordProcessor
+      # interface - processors must implement `init`, `process_records`, and
+      # `shutdown` methods.
       #
-      # http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-implementation-app-java.html#
+      # http://docs.aws.amazon.com/kinesis/latest/dev/kinesis-record-processor-implementation-app-java.html#kcl-java-interface-v2
       #
       # To specify which record processor to create, pass a block to your
       # distribtued consumer that returns a new record processor. This block
@@ -47,17 +49,21 @@ module Telekinesis
       #
       # Telekinesis provides a BaseProcessor that implements no-op versions
       # of all of the required methods to make writing quick processors easier
-      # and a Block processor that executes the
+      # and a Block processor that executes the given block every time
       # `process_records` is called.
       #
-      # To write a stream tailer, you might use Block as follows:
+      # To write a simple stream tailer, you might use Block as follows:
       #
-      #   Telekinesis::Consumer::
-      #   Telekinesis::Consumer::
-      #     records.each
+      #   kcl_worker = Telekinesis::Consumer::KCL.new(config) do
+      #     Telekinesis::Consumer::BlockProcessor.new do |records, checkpointer, millis_behind_latest|
+      #       records.each{|r| puts r}
+      #       $stderr.puts "#{millis_behind_latest} ms behind"
+      #       checkpointer.checkpoint
      #     end
      #   end
      #
+      #   kcl_worker.run
+      #
       def initialize(config, &block)
         raise ArgumentError, "No block given!" unless block_given?
         kcl_config = self.class.build_config(config)
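Tying the renamed `KCL` class to the config hash documented in the comment above, a complete consumer script might look roughly like the sketch below. The app, stream, and output are the placeholders from that comment; `Block` is the processor class defined in block.rb (the comment calls it `BlockProcessor`), and `run` is taken from the same comment.

```ruby
require 'telekinesis'

config = {
  app: 'tail',
  stream: 'some-stream',
  options: {initial_position_in_stream: 'TRIM_HORIZON'}
}

kcl_worker = Telekinesis::Consumer::KCL.new(config) do
  # The block is called whenever the KCL assigns this client a shard.
  Telekinesis::Consumer::Block.new do |records, checkpointer, millis_behind_latest|
    records.each { |r| puts r }
    $stderr.puts "#{millis_behind_latest} ms behind"
    checkpointer.checkpoint
  end
end

kcl_worker.run
```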
data/lib/telekinesis/consumer.rb
CHANGED
data/lib/telekinesis/{telekinesis-2.0.1.jar → telekinesis-3.0.0.jar}
CHANGED
Binary file
data/lib/telekinesis/version.rb
CHANGED
data/telekinesis.gemspec
CHANGED
@@ -12,7 +12,6 @@ Gem::Specification.new do |spec|
   spec.platform = "java"
   spec.files = `git ls-files`.split($/) + Dir.glob("lib/telekinesis/*.jar")
   spec.require_paths = ["lib"]
-  spec.add_dependency "aws-sdk"
 
   spec.add_development_dependency "rake"
   spec.add_development_dependency "nokogiri"
metadata
CHANGED
@@ -1,29 +1,15 @@
 --- !ruby/object:Gem::Specification
 name: telekinesis
 version: !ruby/object:Gem::Version
-  version:
+  version: 3.0.0
 platform: java
 authors:
 - Ben Linsay
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2015-10-
+date: 2015-10-28 00:00:00.000000000 Z
 dependencies:
-- !ruby/object:Gem::Dependency
-  requirement: !ruby/object:Gem::Requirement
-    requirements:
-    - - '>='
-      - !ruby/object:Gem::Version
-        version: '0'
-  name: aws-sdk
-  prerelease: false
-  type: :runtime
-  version_requirements: !ruby/object:Gem::Requirement
-    requirements:
-    - - '>='
-      - !ruby/object:Gem::Version
-        version: '0'
 - !ruby/object:Gem::Dependency
   requirement: !ruby/object:Gem::Requirement
     requirements:
@@ -98,11 +84,10 @@ files:
 - lib/telekinesis/aws.rb
 - lib/telekinesis/aws/client_adapter.rb
 - lib/telekinesis/aws/java_client_adapter.rb
-- lib/telekinesis/aws/ruby_client_adapter.rb
 - lib/telekinesis/consumer.rb
 - lib/telekinesis/consumer/base_processor.rb
 - lib/telekinesis/consumer/block.rb
-- lib/telekinesis/consumer/
+- lib/telekinesis/consumer/kcl.rb
 - lib/telekinesis/java_util.rb
 - lib/telekinesis/logging/java_logging.rb
 - lib/telekinesis/logging/ruby_logger_handler.rb
@@ -121,7 +106,7 @@ files:
 - test/producer/test_helper.rb
 - test/producer/test_sync_producer.rb
 - test/test_helper.rb
-- lib/telekinesis/telekinesis-
+- lib/telekinesis/telekinesis-3.0.0.jar
 homepage: https://github.com/kickstarter/telekinesis
 licenses: []
 metadata: {}
data/lib/telekinesis/aws/ruby_client_adapter.rb
REMOVED
@@ -1,40 +0,0 @@
-module Telekinesis
-  module Aws
-    # A ClientAdapter that wraps the ruby aws-sdk gem (version 2).
-    #
-    # Since the aws-sdk gem does not appear to be thread-safe, this adapter
-    # should not be considered thread safe.
-    class RubyClientAdapter < ClientAdapter
-      # Build a new client adapter. Credentials are passed directly to the
-      # constructor for Aws::Kinesis::Client.
-      #
-      # See: http://docs.aws.amazon.com/sdkforruby/api/Aws/Kinesis/Client.html
-      def self.build(credentials)
-        new(Aws::Kinesis::Client.new(credentials))
-      end
-
-      def put_record(stream, key, value)
-        @client.put_record(stream: stream, partition_key: key, data: value)
-      rescue Aws::Errors::ServiceError => e
-        raise KinesisError.new(e)
-      end
-
-      protected
-
-      def do_put_records(stream, items)
-        @client.put_records(build_put_records_request(stream, items)).flat_map do |page|
-          page.records
-        end
-      rescue Aws::Errors::ServiceError => e
-        raise KinesisError.new(e)
-      end
-
-      def build_put_records_request(stream, items)
-        {
-          stream: stream,
-          records: items.map{|k, v| {partition_key: k, data: v}}
-        }
-      end
-    end
-  end
-end