ruby-kafka 0.6.0.beta2 → 0.6.0.beta3

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: a8de7c8abb93903beb344e75576f60227f42e03b4fc05cfd535b69ed71443c20
-   data.tar.gz: e23592d1e5c00e145cba041729ba69ba80a6ed7452170d6fc0a04fbb472b42cd
+   metadata.gz: b0de3ad482f6130c080cc00b0dd7f4d862861838cfd6c762f6235defada9652f
+   data.tar.gz: c97c1432402dc5d78816ffb0c251e1c808c4ebaaad7609e3f8b7265d0661db8a
  SHA512:
-   metadata.gz: f3c7af12d2699a189d10e12117ecf06b315118fa373dc9365c50578dea3d5b1bc55f06401bd2bb044c778cf036b99daba9c4bc1a6cc6770eefa7c266ecbd0521
-   data.tar.gz: bfe265ae1b2af3024ffb10de3dc627cfda81b75916f7ade3d1537cc5ccae00dd6c134af70f5f6871be1a788e85ad2b4952db779588b59d17bd1bd1d9963b9dd4
+   metadata.gz: 647f4e5908073b08c597e90bc229c9962cf53121590297857caa641ebed7007331aa434238a719584b5c4ef66668a3a9fbe9a732105577b51f3fbc2539b6b3c9
+   data.tar.gz: dce76f84ed3ad8a010935321adc225f5ac757c33cfb39fee8d84c67c4f36bc4fdb053123f84638324695a4ec834d5765a0f44abd66eeb0c153762907f1737e55
data/README.md CHANGED
@@ -37,6 +37,7 @@ A Ruby client library for [Apache Kafka](http://kafka.apache.org/), a distribute
  9. [Security](#security)
      1. [Encryption and Authentication using SSL](#encryption-and-authentication-using-ssl)
      2. [Authentication using SASL](#authentication-using-sasl)
+ 10. [Topic management](#topic-management)
  4. [Design](#design)
      1. [Producer Design](#producer-design)
      2. [Asynchronous Producer Design](#asynchronous-producer-design)
@@ -966,6 +967,77 @@ kafka = Kafka.new(
  )
  ```
 
+ ### Topic management
+
+ In addition to producing and consuming messages, ruby-kafka supports managing Kafka topics and their configurations. See [the Kafka documentation](https://kafka.apache.org/documentation/#topicconfigs) for the full list of topic configuration keys.
+
+ #### List all topics
+
+ Returns an array of topic names.
+
+ ```ruby
+ kafka = Kafka.new(["kafka:9092"])
+ kafka.topics
+ # => ["topic1", "topic2", "topic3"]
+ ```
+
+ #### Create a topic
+
+ ```ruby
+ kafka = Kafka.new(["kafka:9092"])
+ kafka.create_topic("topic")
+ ```
+
+ By default, the new topic has 1 partition, a replication factor of 1, and the brokers' default configs. All three can be customized:
+
+ ```ruby
+ kafka = Kafka.new(["kafka:9092"])
+ kafka.create_topic("topic",
+   num_partitions: 3,
+   replication_factor: 2,
+   config: {
+     "max.message.bytes" => 100000
+   }
+ )
+ ```
+
+ #### Create more partitions for a topic
+
+ After a topic is created, you can increase its number of partitions. The new partition count must be greater than the current one.
+
+ ```ruby
+ kafka = Kafka.new(["kafka:9092"])
+ kafka.create_partitions_for("topic", num_partitions: 10)
+ ```
+
+ #### Fetch configuration for a topic (alpha feature)
+
+ ```ruby
+ kafka = Kafka.new(["kafka:9092"])
+ kafka.describe_topic("topic", ["max.message.bytes", "retention.ms"])
+ # => {"max.message.bytes"=>"100000", "retention.ms"=>"604800000"}
+ ```
+
+ #### Alter a topic configuration (alpha feature)
+
+ Updates the topic's configuration.
+
+ **NOTE**: This feature is for advanced usage. Only use this if you know what you're doing. In particular, the underlying AlterConfigs API replaces the topic's configuration wholesale, so keys omitted from the call may be reset to their broker defaults.
+
+ ```ruby
+ kafka = Kafka.new(["kafka:9092"])
+ kafka.alter_topic("topic", "max.message.bytes" => 100000, "retention.ms" => 604800000)
+ ```
+
+ #### Delete a topic
+
+ ```ruby
+ kafka = Kafka.new(["kafka:9092"])
+ kafka.delete_topic("topic")
+ ```
+
+ Once a topic is marked for deletion, Kafka only hides it from clients; it may take a while before the topic is completely removed.
+
 
  ## Design
 
  The library has been designed as a layered system, with each layer having a clear responsibility:
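Taken together, the README additions above cover the whole topic lifecycle. Here is a minimal sketch of how the calls compose, with error handling via `Kafka::Error`, the library's base exception class; the broker address and the `events` topic name are illustrative:

```ruby
require "kafka"

kafka = Kafka.new(["kafka:9092"])

begin
  # Create a topic, then tighten one of its configs.
  kafka.create_topic("events", num_partitions: 3, replication_factor: 2)
  kafka.alter_topic("events", "retention.ms" => 86_400_000)

  # Verify the change (alpha API); values come back as strings.
  kafka.describe_topic("events", ["retention.ms"])
  # => {"retention.ms"=>"86400000"}
rescue Kafka::Error => e
  # All ruby-kafka client errors inherit from Kafka::Error.
  warn "Topic management failed: #{e.message}"
end
```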
data/lib/kafka/broker.rb CHANGED
@@ -132,6 +132,12 @@ module Kafka
      send_request(request)
    end
 
+   def alter_configs(**options)
+     request = Protocol::AlterConfigsRequest.new(**options)
+
+     send_request(request)
+   end
+
    def create_partitions(**options)
      request = Protocol::CreatePartitionsRequest.new(**options)
 
data/lib/kafka/client.rb CHANGED
@@ -520,6 +520,24 @@ module Kafka
      @cluster.describe_topic(name, configs)
    end
 
+   # Alter the configuration of a topic.
+   #
+   # Configuration keys must match
+   # [Kafka's topic-level configs](https://kafka.apache.org/documentation/#topicconfigs).
+   #
+   # @note This is an alpha level API and is subject to change.
+   #
+   # @example Altering the cleanup policy config of a topic
+   #   kafka = Kafka.new(["kafka1:9092"])
+   #   kafka.alter_topic("my-topic", "cleanup.policy" => "delete", "max.message.bytes" => "100000")
+   #
+   # @param name [String] the name of the topic.
+   # @param configs [Hash<String, String>] hash of desired config keys and values.
+   # @return [nil]
+   def alter_topic(name, configs = {})
+     @cluster.alter_topic(name, configs)
+   end
+
    # Create partitions for a topic.
    #
    # @param name [String] the name of the topic.
data/lib/kafka/cluster.rb CHANGED
@@ -235,6 +235,24 @@ module Kafka
      end
    end
 
+   def alter_topic(name, configs = {})
+     options = {
+       resources: [[Kafka::Protocol::RESOURCE_TYPE_TOPIC, name, configs]]
+     }
+
+     broker = controller_broker
+
+     @logger.info "Altering the config for topic `#{name}` using controller broker #{broker}"
+
+     response = broker.alter_configs(**options)
+
+     response.resources.each do |resource|
+       Protocol.handle_error(resource.error_code, resource.error_message)
+     end
+
+     nil
+   end
+
    def create_partitions_for(name, num_partitions:, timeout:)
      options = {
        topics: [[name, num_partitions, nil]],
data/lib/kafka/datadog.rb CHANGED
@@ -349,5 +349,20 @@ module Kafka
      attach_to "async_producer.kafka"
    end
+
+   class FetcherSubscriber < StatsdSubscriber
+     def loop(event)
+       queue_size = event.payload.fetch(:queue_size)
+
+       tags = {
+         client: event.payload.fetch(:client_id),
+         group_id: event.payload.fetch(:group_id),
+       }
+
+       gauge("fetcher.queue_size", queue_size, tags: tags)
+     end
+
+     attach_to "fetcher.kafka"
+   end
  end
end
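Requiring `kafka/datadog` attaches this subscriber automatically. A minimal sketch of turning it on; the `host=`/`port=` setters are assumed from the module's existing configuration methods:

```ruby
require "kafka"
require "kafka/datadog"

# Point the bundled Statsd client at the local Datadog agent
# (assumed setters; adjust to your agent's address).
Kafka::Datadog.host = "127.0.0.1"
Kafka::Datadog.port = 8125

# From here on, every fetcher loop tick reports a `fetcher.queue_size`
# gauge tagged with the client id and consumer group id.
```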
data/lib/kafka/fetcher.rb CHANGED
@@ -75,6 +75,10 @@ module Kafka
    private
 
    def loop
+     @instrumenter.instrument("loop.fetcher", {
+       queue_size: @queue.size,
+     })
+
      if !@commands.empty?
        cmd, args = @commands.deq
 
@@ -99,6 +103,7 @@ module Kafka
 
    def handle_reset
      @next_offsets.clear
+     @queue.clear
    end
 
    def handle_stop(*)
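The new `loop.fetcher` event can also be consumed directly, without the bundled reporters. A sketch assuming the default ActiveSupport::Notifications backend, which namespaces the event as `loop.fetcher.kafka`:

```ruby
require "active_support/notifications"

# Log the fetcher's queue depth on every loop tick. The payload keys
# mirror what the Datadog and Statsd subscribers consume.
ActiveSupport::Notifications.subscribe("loop.fetcher.kafka") do |_name, _start, _finish, _id, payload|
  puts "fetcher queue depth for #{payload[:group_id]}: #{payload[:queue_size]}"
end
```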
data/lib/kafka/protocol.rb CHANGED
@@ -28,6 +28,7 @@ module Kafka
    CREATE_TOPICS_API = 19
    DELETE_TOPICS_API = 20
    DESCRIBE_CONFIGS_API = 32
+   ALTER_CONFIGS_API = 33
    CREATE_PARTITIONS_API = 37
 
    # A mapping from numeric API keys to symbolic API names.
@@ -170,5 +171,7 @@ require "kafka/protocol/delete_topics_request"
  require "kafka/protocol/delete_topics_response"
  require "kafka/protocol/describe_configs_request"
  require "kafka/protocol/describe_configs_response"
+ require "kafka/protocol/alter_configs_request"
+ require "kafka/protocol/alter_configs_response"
  require "kafka/protocol/create_partitions_request"
  require "kafka/protocol/create_partitions_response"
data/lib/kafka/protocol/alter_configs_request.rb ADDED
@@ -0,0 +1,42 @@
+ module Kafka
+   module Protocol
+
+     class AlterConfigsRequest
+       def initialize(resources:)
+         @resources = resources
+       end
+
+       def api_key
+         ALTER_CONFIGS_API
+       end
+
+       def api_version
+         0
+       end
+
+       def response_class
+         Protocol::AlterConfigsResponse
+       end
+
+       def encode(encoder)
+         encoder.write_array(@resources) do |type, name, configs|
+           encoder.write_int8(type)
+           encoder.write_string(name)
+
+           configs = configs.to_a
+           encoder.write_array(configs) do |config_name, config_value|
+             # The config value is nullable; any other value must be
+             # written in its stringified form.
+             config_value = config_value.to_s unless config_value.nil?
+
+             encoder.write_string(config_name)
+             encoder.write_string(config_value)
+           end
+         end
+         # validate_only flag; we skip this feature, so always false.
+         encoder.write_boolean(false)
+       end
+     end
+
+   end
+ end
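To see what `Cluster#alter_topic` puts on the wire, here is a sketch that builds and encodes a request by hand; the topic name and config value are illustrative, and `Kafka::Protocol::Encoder` is the library's existing IO wrapper:

```ruby
require "kafka"
require "stringio"

request = Kafka::Protocol::AlterConfigsRequest.new(
  resources: [
    # [resource type, resource name, configs] — the tuple shape
    # that Cluster#alter_topic builds.
    [Kafka::Protocol::RESOURCE_TYPE_TOPIC, "my-topic", { "retention.ms" => 604_800_000 }]
  ]
)

buffer = StringIO.new
request.encode(Kafka::Protocol::Encoder.new(buffer))
buffer.string.bytesize # => size of the encoded request body in bytes
```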
data/lib/kafka/protocol/alter_configs_response.rb ADDED
@@ -0,0 +1,47 @@
+ module Kafka
+   module Protocol
+     class AlterConfigsResponse
+       class ResourceDescription
+         attr_reader :name, :type, :error_code, :error_message
+
+         def initialize(name:, type:, error_code:, error_message:)
+           @name = name
+           @type = type
+           @error_code = error_code
+           @error_message = error_message
+         end
+       end
+
+       attr_reader :resources
+
+       def initialize(throttle_time_ms:, resources:)
+         @throttle_time_ms = throttle_time_ms
+         @resources = resources
+       end
+
+       def self.decode(decoder)
+         throttle_time_ms = decoder.int32
+         resources = decoder.array do
+           error_code = decoder.int16
+           error_message = decoder.string
+
+           resource_type = decoder.int8
+           if Kafka::Protocol::RESOURCE_TYPES[resource_type].nil?
+             raise Kafka::ProtocolError, "Resource type not supported: #{resource_type}"
+           end
+           resource_name = decoder.string
+
+           ResourceDescription.new(
+             type: RESOURCE_TYPES[resource_type],
+             name: resource_name,
+             error_code: error_code,
+             error_message: error_message
+           )
+         end
+
+         new(throttle_time_ms: throttle_time_ms, resources: resources)
+       end
+     end
+
+   end
+ end
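For illustration, a hand-built response showing how `Cluster#alter_topic` checks each resource: a zero error code passes through `Protocol.handle_error` silently, anything else raises the matching `Kafka::Error` subclass. The topic name is illustrative:

```ruby
require "kafka"

resource = Kafka::Protocol::AlterConfigsResponse::ResourceDescription.new(
  type: :topic,
  name: "my-topic",
  error_code: 0,   # 0 means the alteration succeeded for this resource
  error_message: nil
)

response = Kafka::Protocol::AlterConfigsResponse.new(
  throttle_time_ms: 0,
  resources: [resource]
)

response.resources.each do |r|
  Kafka::Protocol.handle_error(r.error_code, r.error_message)
end
```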
data/lib/kafka/protocol/create_topics_request.rb CHANGED
@@ -27,8 +27,8 @@ module Kafka
          # Replica assignments. We don't care.
          encoder.write_array([])
 
-         # Config entries. We don't care.
          encoder.write_array(config.fetch(:config)) do |config_name, config_value|
+           config_value = config_value.to_s unless config_value.nil?
            encoder.write_string(config_name)
            encoder.write_string(config_value)
          end
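This one-line change mirrors the stringification in `AlterConfigsRequest` above: `write_string` expects a String, so non-nil config values are coerced first. In effect, the README's `create_topic` example can pass `"max.message.bytes" => 100000` as a bare integer. A sketch with illustrative topic names:

```ruby
require "kafka"

kafka = Kafka.new(["kafka:9092"])

# Both spellings now encode identically on the wire.
kafka.create_topic("events-a", config: { "max.message.bytes" => 100000 })
kafka.create_topic("events-b", config: { "max.message.bytes" => "100000" })
```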
data/lib/kafka/statsd.rb CHANGED
@@ -258,5 +258,17 @@ module Kafka
      attach_to "async_producer.kafka"
    end
+
+   class FetcherSubscriber < StatsdSubscriber
+     def loop(event)
+       queue_size = event.payload.fetch(:queue_size)
+       client = event.payload.fetch(:client_id)
+       group_id = event.payload.fetch(:group_id)
+
+       gauge("fetcher.#{client}.#{group_id}.queue_size", queue_size)
+     end
+
+     attach_to "fetcher.kafka"
+   end
  end
end
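The Statsd counterpart is enabled the same way; a sketch assuming the module exposes the same `host=`/`port=` setters as the Datadog reporter. Note the design difference: plain StatsD has no tags, so the client id and group id are baked into the metric name instead:

```ruby
require "kafka"
require "kafka/statsd"

# Assumed module-level configuration, mirroring kafka/datadog.
Kafka::Statsd.host = "127.0.0.1"
Kafka::Statsd.port = 8125

# The fetcher gauge arrives as `fetcher.<client>.<group_id>.queue_size`.
```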
data/lib/kafka/version.rb CHANGED
@@ -1,3 +1,3 @@
  module Kafka
-   VERSION = "0.6.0.beta2"
+   VERSION = "0.6.0.beta3"
  end
metadata CHANGED
@@ -1,14 +1,14 @@
  --- !ruby/object:Gem::Specification
  name: ruby-kafka
  version: !ruby/object:Gem::Version
-   version: 0.6.0.beta2
+   version: 0.6.0.beta3
  platform: ruby
  authors:
  - Daniel Schierbeck
  autorequire:
  bindir: exe
  cert_chain: []
- date: 2018-04-17 00:00:00.000000000 Z
+ date: 2018-04-24 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: bundler
@@ -336,6 +336,8 @@ files:
  - lib/kafka/produce_operation.rb
  - lib/kafka/producer.rb
  - lib/kafka/protocol.rb
+ - lib/kafka/protocol/alter_configs_request.rb
+ - lib/kafka/protocol/alter_configs_response.rb
  - lib/kafka/protocol/api_versions_request.rb
  - lib/kafka/protocol/api_versions_response.rb
  - lib/kafka/protocol/consumer_group_protocol.rb