kafka 0.5.1 → 0.5.2

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: a6970af4b6ef931eb091068ee75220a2ce7a3f79a3afa7abd0dc40c9b6f88a18
-  data.tar.gz: cdda5ed68a3f9310fe6ce33930d3ebff71f5d9c53b735c2412e6a72fce9ba546
+  metadata.gz: ea31d6c027c3e63e1b9933bd8f130bf2d216c3e96e09ea7c583c8d15e0f0981d
+  data.tar.gz: 2c55aea758996dd97498d8fc92abd52bdb680311113cccf3afcbc26b2fb3ef46
 SHA512:
-  metadata.gz: 1cd6d75ccbb42e8d37d763f542c5c267fae092bb26558057f2f3b04de92d27021b4a57361ccf12ea40a8ebfcc234a651b27c71341ecb57327e9cbd32c486105f
-  data.tar.gz: 7cc3f4d3b70f11b20af2ed8d6fce065d8abdf5a53c5ab60bb39cfbb59c1a306b6eb876337bc2b7370bd83377934942e24868c46db5b511766a5f7296f2bea7bf
+  metadata.gz: 423144f6caf3b753e61aaa8db147139523eb5e0eb2f72948bc83a52642e1f7abcdc9dc36d7323cdf755d38d71293a55f8730a659c572751ef29ebb9107e5e45b
+  data.tar.gz: d2d78e83b8adf7da5fab16e0c288d68f70b603dec02c1671fee4359ff9af12ce3f08b8d8198eeee3eb16a5e1c349faf5060c78a377742a5ab4289e2b533638ef
@@ -1,5 +1,16 @@
 ## Unreleased

+## 0.5.2 / 2020-01-27
+
+* Fixes DeliveryReport#error? and DeliveryReport#successful? being swapped
+* Fixes Kafka::FFI::Client#cluster_id being incorrectly bound
+* Fixes naming issues in Message#headers and Message#detach_headers
+* Fixes naming issue in Config#set_ssl_cert
+* Fixes passing nil for a Kafka::FFI::Opaque
+* Adds all current RD_KAFKA_RESP_ERR_ constants
+* Adds Kafka::QueueFullError for when the producer queue is full
+* Adds bindings for built-in partitioners
+
 ## 0.5.1 / 2020-01-22

 * Kafka::Consumer now uses poll_set_consumer instead of a Poller thread
data/README.md CHANGED
@@ -2,12 +2,17 @@

 [![Build Status](https://travis-ci.com/deadmanssnitch/kafka.svg?branch=master)](https://travis-ci.com/deadmanssnitch/kafka)
 [![Gem Version](https://badge.fury.io/rb/kafka.svg)](https://badge.fury.io/rb/kafka)
+[![Documentation](https://img.shields.io/badge/-Documentation-success)](https://deadmanssnitch.com/opensource/kafka/docs/)

-The kafka gem provides a general producer and consumer for
-[Apache Kafka](https://kafka.apache.org) using bindings to the official
-[C client librdkafka](https://github.com/edenhill/librdkafka). The `Kafka::FFI`
-module implements an object oriented mapping to most of the librdkafka API,
-making it easier and safer to use than calling functions directly.
+Kafka provides a Ruby client for [Apache Kafka](https://kafka.apache.org) that
+leverages [librdkafka](https://github.com/edenhill/librdkafka) for its
+performance and general correctness.
+
+## Features
+- Thread safe Producer with sync and async delivery reporting
+- High-level balanced Consumer
+- Admin client
+- Object oriented librdkafka mappings for easy custom implementations

 ## ⚠️ Project Status: Beta ⚠️

@@ -24,6 +29,8 @@ against your application and traffic, implement missing functions (see
 `rake ffi:missing`), work with the API and make suggestions for improvements.
 All help is wanted and appreciated.

+Kafka is currently in production usage at [Dead Man's Snitch](https://deadmanssnitch.com).
+
 ## Installation

 Add this line to your application's Gemfile:
@@ -40,15 +47,15 @@ Or install it yourself as:

     $ gem install kafka

-## Usage
+## Getting Started

 For more examples see [the examples directory](examples/).

 For a detailed introduction on librdkafka which would be useful when working
-with `Kafka::FFI` directly, see
+with `Kafka::FFI` directly, see
 [the librdkafka documentation](https://github.com/edenhill/librdkafka/blob/master/INTRODUCTION.md).

-### Sending Message to a Topic
+### Sending a Message to a Topic

 ```ruby
 require "kafka"
@@ -61,8 +68,17 @@ event = { time: Time.now, status: "success" }
 result = producer.produce("events", event.to_json)

 # Wait for the delivery to confirm that publishing was successful.
-# result.wait
-# result.successful?
+result.wait
+result.successful?
+
+# Provide a callback to be called when the delivery status is ready.
+producer.produce("events", event.to_json) do |result|
+  StatsD.increment("kafka.total")
+
+  if result.error?
+    StatsD.increment("kafka.errors")
+  end
+end
 ```

 ### Consuming Messages from a Topic
@@ -144,11 +160,25 @@ amount of time we have available to invest. Embracing memory management and
 building clean separations between layers should reduce the burden to implement
 new bindings as the rules and responsibilities of each layer are clear.

+## Thread / Fork Safety
+
+The `Producer` is thread safe for publishing messages but should only be closed
+from a single thread. While the `Consumer` is thread safe for calls to `#poll`,
+only one message can be in flight at a time, causing the threads to serialize.
+Instead, create a single consumer for each thread.
+
+Kafka _is not_ `fork` safe. Make sure to close any Producers or Consumers before
+forking and rebuild them after forking the new process.
+
+## Compatibility
+
+Kafka requires Ruby 2.5+ and is tested against Ruby 2.5+ and Kafka 2.1+.
+
 ## Development

-To get started with development make sure to have docker, docker-compose, and
-[kafkacat](https://github.com/edenhill/kafkacat) installed as they make getting
-up to speed easier.
+To get started with development make sure to have `docker`, `docker-compose`, and
+[`kafkacat`](https://github.com/edenhill/kafkacat) installed as they make getting
+up to speed easier. Some rake tasks depend on `ctags`.

 Before running the test, start a Kafka broker instance
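A minimal sketch of the one-consumer-per-thread pattern described in the new Thread / Fork Safety section above. The `Kafka::Config` and `Kafka::Consumer` constructor arguments and the `#poll` timeout are assumptions for illustration only; see the gem's examples directory for real usage:

```ruby
require "kafka"

# Assumed constructor arguments, shown only to make the sketch self-contained.
config = Kafka::Config.new("bootstrap.servers" => "localhost:9092", "group.id" => "example")

# One Consumer per thread; sharing a single Consumer would serialize all
# threads behind one in-flight poll.
workers = 4.times.map do
  Thread.new do
    consumer = Kafka::Consumer.new(config)

    begin
      # Poll a bounded number of times to keep the sketch finite.
      25.times do
        consumer.poll(250) do |message|
          # handle message
        end
      end
    ensure
      # Close from the thread that owns the Consumer.
      consumer.close
    end
  end
end

workers.each(&:join)
```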
 
@@ -175,9 +205,11 @@ the [code of conduct](https://github.com/deadmanssnitch/kafka/blob/master/CODE_O

 ## License

-The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+The gem is available as open source under the terms of the
+[MIT License](https://opensource.org/licenses/MIT).

 ## Code of Conduct

-Everyone interacting in the Kafka project's codebases and issue trackers are expected to follow the
+Everyone interacting in the Kafka project's codebases and issue trackers are
+expected to follow the
 [code of conduct](https://github.com/deadmanssnitch/kafka/blob/master/CODE_OF_CONDUCT.md).
data/Rakefile CHANGED
@@ -15,14 +15,13 @@ end
 task default: [:ext, :spec]

 namespace :ffi do
-  desc "Lists the librdkafka functions that have not been implemented in Kafka::FFI"
-  task :missing do
-    require_relative "lib/kafka/version"
+  require_relative "lib/kafka/version"

-    require "uri"
-    require "net/http"
-    require "tempfile"
+  require "uri"
+  require "net/http"
+  require "tempfile"

+  def header
     header = Tempfile.new(["rdkafka", ".h"])

     # Fetch the header for the pinned version of librdkafka. rdkafka.h contains
@@ -32,7 +31,14 @@ namespace :ffi do
     header.write(resp)
     header.close

-    all = `ctags -x --sort=yes --kinds-C=pf #{header.path} | awk '{ print $1 }'`
+    at_exit { header.unlink }
+
+    header.path
+  end
+
+  desc "Lists the librdkafka functions that have not been implemented in Kafka::FFI"
+  task :missing do
+    all = `ctags -x --sort=yes --kinds-C=pf #{header} | awk '{ print $1 }'`
     all = all.split("\n")

     ffi_path = File.expand_path("lib/kafka/ffi.rb", __dir__)
@@ -41,8 +47,6 @@ namespace :ffi do

     missing = all - implemented
     puts missing
-  ensure
-    header.unlink
   end

   desc "Prints the list of implemented librdkafka functions"
@@ -50,6 +54,29 @@ namespace :ffi do
     ffi_path = File.expand_path("lib/kafka/ffi.rb", __dir__)
     puts `grep -o -h -P '^\\s+attach_function\\s+:\\Krd_kafka_\\w+' #{ffi_path} | sort`
   end
+
+  namespace :sync do
+    desc "Update ffi.rb with all errors defined in rdkafka.h"
+    task :errors do
+      ffi_path = File.expand_path("lib/kafka/ffi.rb", __dir__)
+
+      cmd = [
+        # Find all of the enumerator types in the header
+        "ctags -x --sort=no --kinds-C=e #{header}",
+
+        # Reduce it to just RD_KAFKA_RESP_ERR_* and their values
+        "grep -o -P 'RD_KAFKA_RESP_ERR_\\w+ = -?\\d+'",
+
+        # Add spacing to the constants so they line up correctly.
+        "sed -e 's/^/ /'",
+
+        # Delete any existing error constants then append the generated result.
+        "sed -e '/^\\s\\+RD_KAFKA_RESP_ERR_.\\+=.\\+/d' -e '/Response Errors/r /dev/stdin' #{ffi_path}",
+      ].join(" | ")
+
+      File.write(ffi_path, `#{cmd}`, mode: "w")
+    end
+  end
 end

 namespace :kafka do
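With the `header` helper above in place, the constants block in `ffi.rb` can be regenerated against the pinned `rdkafka.h` by running the new task:

    $ rake ffi:sync:errors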
@@ -64,6 +91,7 @@ namespace :kafka do

   desc "Shutdown the development Kafka instance"
   task :down do
-    sh "docker-compose -p ruby_kafka_dev down"
+    compose = Dir["spec/support/kafka-*.yml"].max
+    sh "docker-compose -p ruby_kafka_dev -f #{compose} down"
   end
 end
@@ -54,7 +54,7 @@ module Kafka
       @client.consumer_poll(timeout, &block)
     end

-    # @param msg [Consumer::Message]
+    # @param msg [Kafka::FFI::Message]
     def commit(msg, async: false)
       list = Kafka::FFI::TopicPartitionList.new(1)

@@ -66,7 +66,7 @@ module Kafka
       list.destroy
     end

-    # Gracefully shutdown the consumer and it's connections.
+    # Gracefully shutdown the consumer and its connections.
     #
     # @note After calling #close it is unsafe to call any other method on the
     #   Consumer.
@@ -9,6 +9,26 @@ module Kafka
   # @see rdkafka.h RD_KAFKA_RESP_ERR_*
   # @see rdkafka.h rd_kafka_resp_err_t
   class ::Kafka::ResponseError < Error
+    def self.new(code, message = nil)
+      klass =
+        case code
+        when Kafka::FFI::RD_KAFKA_RESP_ERR__QUEUE_FULL
+          QueueFullError
+        else
+          ResponseError
+        end
+
+      error = klass.allocate
+      error.send(:initialize, code, message)
+      error
+    end
+
+    # exception is called instead of `new` when using the form:
+    #   raise Kafka::ResponseError, code
+    def self.exception(code)
+      new(code)
+    end
+
     # @attr code [Integer] Error code as defined by librdkafka.
     attr_reader :code

@@ -41,4 +61,10 @@ module Kafka
       @message || ::Kafka::FFI.rd_kafka_err2str(@code)
     end
   end
+
+  # QueueFullError is raised when producing messages and the queue of messages
+  # pending delivery to topics has reached its size limit.
+  #
+  # @see Config option `queue.buffering.max.messages`
+  class QueueFullError < ResponseError; end
 end
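A quick illustration of the new dispatch: `raise Kafka::ResponseError, code` now routes through `.exception` and the overridden `.new`, so a queue-full code surfaces as the subclass:

```ruby
begin
  raise Kafka::ResponseError, Kafka::FFI::RD_KAFKA_RESP_ERR__QUEUE_FULL
rescue Kafka::QueueFullError => e
  e.code    # => -184
  e.message # falls back to rd_kafka_err2str, e.g. "Local: Queue full"
end
```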
@@ -59,12 +59,145 @@
   ]

   # Response Errors
+  RD_KAFKA_RESP_ERR__BEGIN = -200
+  RD_KAFKA_RESP_ERR__BAD_MSG = -199
+  RD_KAFKA_RESP_ERR__BAD_COMPRESSION = -198
+  RD_KAFKA_RESP_ERR__DESTROY = -197
+  RD_KAFKA_RESP_ERR__FAIL = -196
+  RD_KAFKA_RESP_ERR__TRANSPORT = -195
+  RD_KAFKA_RESP_ERR__CRIT_SYS_RESOURCE = -194
+  RD_KAFKA_RESP_ERR__RESOLVE = -193
+  RD_KAFKA_RESP_ERR__MSG_TIMED_OUT = -192
+  RD_KAFKA_RESP_ERR__PARTITION_EOF = -191
+  RD_KAFKA_RESP_ERR__UNKNOWN_PARTITION = -190
+  RD_KAFKA_RESP_ERR__FS = -189
+  RD_KAFKA_RESP_ERR__UNKNOWN_TOPIC = -188
+  RD_KAFKA_RESP_ERR__ALL_BROKERS_DOWN = -187
+  RD_KAFKA_RESP_ERR__INVALID_ARG = -186
   RD_KAFKA_RESP_ERR__TIMED_OUT = -185
+  RD_KAFKA_RESP_ERR__QUEUE_FULL = -184
+  RD_KAFKA_RESP_ERR__ISR_INSUFF = -183
+  RD_KAFKA_RESP_ERR__NODE_UPDATE = -182
+  RD_KAFKA_RESP_ERR__SSL = -181
+  RD_KAFKA_RESP_ERR__WAIT_COORD = -180
+  RD_KAFKA_RESP_ERR__UNKNOWN_GROUP = -179
+  RD_KAFKA_RESP_ERR__IN_PROGRESS = -178
+  RD_KAFKA_RESP_ERR__PREV_IN_PROGRESS = -177
+  RD_KAFKA_RESP_ERR__EXISTING_SUBSCRIPTION = -176
   RD_KAFKA_RESP_ERR__ASSIGN_PARTITIONS = -175
   RD_KAFKA_RESP_ERR__REVOKE_PARTITIONS = -174
+  RD_KAFKA_RESP_ERR__CONFLICT = -173
+  RD_KAFKA_RESP_ERR__STATE = -172
+  RD_KAFKA_RESP_ERR__UNKNOWN_PROTOCOL = -171
+  RD_KAFKA_RESP_ERR__NOT_IMPLEMENTED = -170
+  RD_KAFKA_RESP_ERR__AUTHENTICATION = -169
   RD_KAFKA_RESP_ERR__NO_OFFSET = -168
+  RD_KAFKA_RESP_ERR__OUTDATED = -167
+  RD_KAFKA_RESP_ERR__TIMED_OUT_QUEUE = -166
+  RD_KAFKA_RESP_ERR__UNSUPPORTED_FEATURE = -165
+  RD_KAFKA_RESP_ERR__WAIT_CACHE = -164
+  RD_KAFKA_RESP_ERR__INTR = -163
+  RD_KAFKA_RESP_ERR__KEY_SERIALIZATION = -162
+  RD_KAFKA_RESP_ERR__VALUE_SERIALIZATION = -161
+  RD_KAFKA_RESP_ERR__KEY_DESERIALIZATION = -160
+  RD_KAFKA_RESP_ERR__VALUE_DESERIALIZATION = -159
+  RD_KAFKA_RESP_ERR__PARTIAL = -158
+  RD_KAFKA_RESP_ERR__READ_ONLY = -157
   RD_KAFKA_RESP_ERR__NOENT = -156
+  RD_KAFKA_RESP_ERR__UNDERFLOW = -155
+  RD_KAFKA_RESP_ERR__INVALID_TYPE = -154
+  RD_KAFKA_RESP_ERR__RETRY = -153
+  RD_KAFKA_RESP_ERR__PURGE_QUEUE = -152
+  RD_KAFKA_RESP_ERR__PURGE_INFLIGHT = -151
   RD_KAFKA_RESP_ERR__FATAL = -150
+  RD_KAFKA_RESP_ERR__INCONSISTENT = -149
+  RD_KAFKA_RESP_ERR__GAPLESS_GUARANTEE = -148
+  RD_KAFKA_RESP_ERR__MAX_POLL_EXCEEDED = -147
+  RD_KAFKA_RESP_ERR__UNKNOWN_BROKER = -146
+  RD_KAFKA_RESP_ERR__END = -100
+  RD_KAFKA_RESP_ERR_UNKNOWN = -1
+  RD_KAFKA_RESP_ERR_NO_ERROR = 0
+  RD_KAFKA_RESP_ERR_OFFSET_OUT_OF_RANGE = 1
+  RD_KAFKA_RESP_ERR_INVALID_MSG = 2
+  RD_KAFKA_RESP_ERR_UNKNOWN_TOPIC_OR_PART = 3
+  RD_KAFKA_RESP_ERR_INVALID_MSG_SIZE = 4
+  RD_KAFKA_RESP_ERR_LEADER_NOT_AVAILABLE = 5
+  RD_KAFKA_RESP_ERR_NOT_LEADER_FOR_PARTITION = 6
+  RD_KAFKA_RESP_ERR_REQUEST_TIMED_OUT = 7
+  RD_KAFKA_RESP_ERR_BROKER_NOT_AVAILABLE = 8
+  RD_KAFKA_RESP_ERR_REPLICA_NOT_AVAILABLE = 9
+  RD_KAFKA_RESP_ERR_MSG_SIZE_TOO_LARGE = 10
+  RD_KAFKA_RESP_ERR_STALE_CTRL_EPOCH = 11
+  RD_KAFKA_RESP_ERR_OFFSET_METADATA_TOO_LARGE = 12
+  RD_KAFKA_RESP_ERR_NETWORK_EXCEPTION = 13
+  RD_KAFKA_RESP_ERR_COORDINATOR_LOAD_IN_PROGRESS = 14
+  RD_KAFKA_RESP_ERR_COORDINATOR_NOT_AVAILABLE = 15
+  RD_KAFKA_RESP_ERR_NOT_COORDINATOR = 16
+  RD_KAFKA_RESP_ERR_TOPIC_EXCEPTION = 17
+  RD_KAFKA_RESP_ERR_RECORD_LIST_TOO_LARGE = 18
+  RD_KAFKA_RESP_ERR_NOT_ENOUGH_REPLICAS = 19
+  RD_KAFKA_RESP_ERR_NOT_ENOUGH_REPLICAS_AFTER_APPEND = 20
+  RD_KAFKA_RESP_ERR_INVALID_REQUIRED_ACKS = 21
+  RD_KAFKA_RESP_ERR_ILLEGAL_GENERATION = 22
+  RD_KAFKA_RESP_ERR_INCONSISTENT_GROUP_PROTOCOL = 23
+  RD_KAFKA_RESP_ERR_INVALID_GROUP_ID = 24
+  RD_KAFKA_RESP_ERR_UNKNOWN_MEMBER_ID = 25
+  RD_KAFKA_RESP_ERR_INVALID_SESSION_TIMEOUT = 26
+  RD_KAFKA_RESP_ERR_REBALANCE_IN_PROGRESS = 27
+  RD_KAFKA_RESP_ERR_INVALID_COMMIT_OFFSET_SIZE = 28
+  RD_KAFKA_RESP_ERR_TOPIC_AUTHORIZATION_FAILED = 29
+  RD_KAFKA_RESP_ERR_GROUP_AUTHORIZATION_FAILED = 30
+  RD_KAFKA_RESP_ERR_CLUSTER_AUTHORIZATION_FAILED = 31
+  RD_KAFKA_RESP_ERR_INVALID_TIMESTAMP = 32
+  RD_KAFKA_RESP_ERR_UNSUPPORTED_SASL_MECHANISM = 33
+  RD_KAFKA_RESP_ERR_ILLEGAL_SASL_STATE = 34
+  RD_KAFKA_RESP_ERR_UNSUPPORTED_VERSION = 35
+  RD_KAFKA_RESP_ERR_TOPIC_ALREADY_EXISTS = 36
+  RD_KAFKA_RESP_ERR_INVALID_PARTITIONS = 37
+  RD_KAFKA_RESP_ERR_INVALID_REPLICATION_FACTOR = 38
+  RD_KAFKA_RESP_ERR_INVALID_REPLICA_ASSIGNMENT = 39
+  RD_KAFKA_RESP_ERR_INVALID_CONFIG = 40
+  RD_KAFKA_RESP_ERR_NOT_CONTROLLER = 41
+  RD_KAFKA_RESP_ERR_INVALID_REQUEST = 42
+  RD_KAFKA_RESP_ERR_UNSUPPORTED_FOR_MESSAGE_FORMAT = 43
+  RD_KAFKA_RESP_ERR_POLICY_VIOLATION = 44
+  RD_KAFKA_RESP_ERR_OUT_OF_ORDER_SEQUENCE_NUMBER = 45
+  RD_KAFKA_RESP_ERR_DUPLICATE_SEQUENCE_NUMBER = 46
+  RD_KAFKA_RESP_ERR_INVALID_PRODUCER_EPOCH = 47
+  RD_KAFKA_RESP_ERR_INVALID_TXN_STATE = 48
+  RD_KAFKA_RESP_ERR_INVALID_PRODUCER_ID_MAPPING = 49
+  RD_KAFKA_RESP_ERR_INVALID_TRANSACTION_TIMEOUT = 50
+  RD_KAFKA_RESP_ERR_CONCURRENT_TRANSACTIONS = 51
+  RD_KAFKA_RESP_ERR_TRANSACTION_COORDINATOR_FENCED = 52
+  RD_KAFKA_RESP_ERR_TRANSACTIONAL_ID_AUTHORIZATION_FAILED = 53
+  RD_KAFKA_RESP_ERR_SECURITY_DISABLED = 54
+  RD_KAFKA_RESP_ERR_OPERATION_NOT_ATTEMPTED = 55
+  RD_KAFKA_RESP_ERR_KAFKA_STORAGE_ERROR = 56
+  RD_KAFKA_RESP_ERR_LOG_DIR_NOT_FOUND = 57
+  RD_KAFKA_RESP_ERR_SASL_AUTHENTICATION_FAILED = 58
+  RD_KAFKA_RESP_ERR_UNKNOWN_PRODUCER_ID = 59
+  RD_KAFKA_RESP_ERR_REASSIGNMENT_IN_PROGRESS = 60
+  RD_KAFKA_RESP_ERR_DELEGATION_TOKEN_AUTH_DISABLED = 61
+  RD_KAFKA_RESP_ERR_DELEGATION_TOKEN_NOT_FOUND = 62
+  RD_KAFKA_RESP_ERR_DELEGATION_TOKEN_OWNER_MISMATCH = 63
+  RD_KAFKA_RESP_ERR_DELEGATION_TOKEN_REQUEST_NOT_ALLOWED = 64
+  RD_KAFKA_RESP_ERR_DELEGATION_TOKEN_AUTHORIZATION_FAILED = 65
+  RD_KAFKA_RESP_ERR_DELEGATION_TOKEN_EXPIRED = 66
+  RD_KAFKA_RESP_ERR_INVALID_PRINCIPAL_TYPE = 67
+  RD_KAFKA_RESP_ERR_NON_EMPTY_GROUP = 68
+  RD_KAFKA_RESP_ERR_GROUP_ID_NOT_FOUND = 69
+  RD_KAFKA_RESP_ERR_FETCH_SESSION_ID_NOT_FOUND = 70
+  RD_KAFKA_RESP_ERR_INVALID_FETCH_SESSION_EPOCH = 71
+  RD_KAFKA_RESP_ERR_LISTENER_NOT_FOUND = 72
+  RD_KAFKA_RESP_ERR_TOPIC_DELETION_DISABLED = 73
+  RD_KAFKA_RESP_ERR_FENCED_LEADER_EPOCH = 74
+  RD_KAFKA_RESP_ERR_UNKNOWN_LEADER_EPOCH = 75
+  RD_KAFKA_RESP_ERR_UNSUPPORTED_COMPRESSION_TYPE = 76
+  RD_KAFKA_RESP_ERR_STALE_BROKER_EPOCH = 77
+  RD_KAFKA_RESP_ERR_OFFSET_NOT_AVAILABLE = 78
+  RD_KAFKA_RESP_ERR_MEMBER_ID_REQUIRED = 79
+  RD_KAFKA_RESP_ERR_PREFERRED_LEADER_NOT_AVAILABLE = 80
+  RD_KAFKA_RESP_ERR_GROUP_MAX_SIZE_REACHED = 81

   # @see rdkafka.h rd_kafka_resp_err_t
   enum :error_code, [
@@ -276,7 +409,7 @@
   attach_function :rd_kafka_type, [Client], :kafka_type
   attach_function :rd_kafka_name, [Client], :string
   attach_function :rd_kafka_memberid, [Client], :pointer
-  attach_function :rd_kafka_clusterid, [Client], :pointer
+  attach_function :rd_kafka_clusterid, [Client, :timeout_ms], :pointer, blocking: true
   attach_function :rd_kafka_controllerid, [Client, :timeout_ms], :broker_id, blocking: true
   attach_function :rd_kafka_default_topic_conf_dup, [Client], TopicConfig
   attach_function :rd_kafka_conf, [Client], Config
@@ -440,14 +573,14 @@
   attach_function :rd_kafka_consume_start, [Topic, :partition, :offset], :int
   attach_function :rd_kafka_consume_start_queue, [Topic, :partition, :offset, Queue], :int
   attach_function :rd_kafka_consume_stop, [Topic, :partition], :int
-  attach_function :rd_kafka_consume, [Topic, :partition, :timeout_ms], Message.by_ref
-  attach_function :rd_kafka_consume_batch, [Topic, :partition, :timeout_ms, :pointer, :size_t], :ssize_t
-  attach_function :rd_kafka_consume_callback, [Topic, :partition, :timeout_ms, :consume_cb, :pointer], :int
+  attach_function :rd_kafka_consume, [Topic, :partition, :timeout_ms], Message.by_ref, blocking: true
+  attach_function :rd_kafka_consume_batch, [Topic, :partition, :timeout_ms, :pointer, :size_t], :ssize_t, blocking: true
+  attach_function :rd_kafka_consume_callback, [Topic, :partition, :timeout_ms, :consume_cb, :pointer], :int, blocking: true

   ### Simple Consumer Queue API
-  attach_function :rd_kafka_consume_queue, [Queue, :timeout_ms], Message.by_ref
-  attach_function :rd_kafka_consume_batch_queue, [Queue, :timeout_ms, :pointer, :size_t], :ssize_t
-  attach_function :rd_kafka_consume_callback_queue, [Queue, :timeout_ms, :consume_cb, :pointer], :int
+  attach_function :rd_kafka_consume_queue, [Queue, :timeout_ms], Message.by_ref, blocking: true
+  attach_function :rd_kafka_consume_batch_queue, [Queue, :timeout_ms, :pointer, :size_t], :ssize_t, blocking: true
+  attach_function :rd_kafka_consume_callback_queue, [Queue, :timeout_ms, :consume_cb, :pointer], :int, blocking: true

   ### Simple Consumer Topic + Partition API
   attach_function :rd_kafka_offset_store, [Topic, :partition, :offset], :error_code
@@ -460,17 +593,25 @@
   attach_function :rd_kafka_flush, [Producer, :timeout_ms], :error_code, blocking: true
   attach_function :rd_kafka_purge, [Producer, :int], :error_code, blocking: true

+  ## Partitioners
+
+  attach_function :rd_kafka_msg_partitioner_random, [Topic, :string, :size_t, :int32, Opaque, Opaque], :partition
+  attach_function :rd_kafka_msg_partitioner_consistent, [Topic, :string, :size_t, :int32, Opaque, Opaque], :partition
+  attach_function :rd_kafka_msg_partitioner_consistent_random, [Topic, :string, :size_t, :int32, Opaque, Opaque], :partition
+  attach_function :rd_kafka_msg_partitioner_murmur2, [Topic, :string, :size_t, :int32, Opaque, Opaque], :partition
+  attach_function :rd_kafka_msg_partitioner_murmur2_random, [Topic, :string, :size_t, :int32, Opaque, Opaque], :partition
+
   # Metadata
-  attach_function :rd_kafka_metadata, [Client, :bool, Topic, :pointer, :timeout_ms], :error_code
+  attach_function :rd_kafka_metadata, [Client, :bool, Topic, :pointer, :timeout_ms], :error_code, blocking: true
   attach_function :rd_kafka_metadata_destroy, [Metadata.by_ref], :void

   # Group List
-  attach_function :rd_kafka_list_groups, [Client, :string, :pointer, :timeout_ms], :error_code
+  attach_function :rd_kafka_list_groups, [Client, :string, :pointer, :timeout_ms], :error_code, blocking: true
   attach_function :rd_kafka_group_list_destroy, [GroupList.by_ref], :void

   # Queue
   attach_function :rd_kafka_queue_new, [Client], Queue
-  attach_function :rd_kafka_queue_poll, [Queue, :timeout_ms], Event
+  attach_function :rd_kafka_queue_poll, [Queue, :timeout_ms], Event, blocking: true
   attach_function :rd_kafka_queue_get_main, [Client], Queue
   attach_function :rd_kafka_queue_get_consumer, [Consumer], Queue
   attach_function :rd_kafka_queue_get_partition, [Consumer, :topic, :partition], Queue
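The new bindings mirror librdkafka's built in partitioners. In librdkafka these are normally selected through the `partitioner` topic configuration property rather than being called directly; a hedged sketch, where the `TopicConfig.new` constructor and `#set` method are assumptions about this gem's API:

```ruby
# Hypothetical sketch: select a built in partitioner via topic configuration.
# librdkafka's `partitioner` property accepts values such as "random",
# "consistent", "consistent_random", "murmur2", and "murmur2_random".
topic_config = Kafka::FFI::TopicConfig.new # assumed constructor
topic_config.set("partitioner", "murmur2_random")
```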
@@ -17,7 +17,7 @@ module Kafka::FFI
  #
  # @note Naming this is hard and librdkafka primarily just refers to it as "a
  #   handle" to an instance. It's more akin to an internal service and this
- #   Client talks the API to that service.
+ #   Client talks the API of that service.
  class Client < OpaquePointer
    require "kafka/ffi/consumer"
    require "kafka/ffi/producer"
@@ -101,6 +101,9 @@
     def initialize(ptr)
       super(ptr)

+      # Caches Topics created on the first call to #topic below. Topics need to
+      # be destroyed before destroying the Client. We keep a set in the client
+      # so end users don't need to think about it.
       @topics = {}
     end

@@ -108,6 +111,8 @@
     #
     # @note The returned config is read-only and tied to the lifetime of the
     #   Client. Don't try to modify or destroy the config.
+    #
+    # @return [Config] Client's current config. Read-only.
     def config
       ::Kafka::FFI.rd_kafka_conf(self)
     end
@@ -122,17 +127,23 @@
     # Retrieves the Client's Cluster ID
     #
     # @note requires config `api.version.request` set to true
-    def cluster_id
-      id = ::Kafka::FFI.rd_kafka_clusterid(self)
-
-      if id.null?
+    #
+    # @param timeout [Integer] Maximum time to wait in milliseconds. Use 0 for a
+    #   non-blocking call that will return immediately if metadata is cached.
+    #
+    # @return [nil] Cluster ID not available
+    # @return [String] ID of the Cluster
+    def cluster_id(timeout: 1000)
+      ptr = ::Kafka::FFI.rd_kafka_clusterid(self, timeout)
+      if ptr.null?
         return nil
       end

-      id.read_string
-    ensure
-      if !id.null?
-        ::Kafka::FFI.rd_kafka_mem_free(self, id)
+      begin
+        ptr.read_string
+      ensure
+        # Documentation explicitly says that the string needs to be freed.
+        ::Kafka::FFI.rd_kafka_mem_free(self, ptr)
       end
    end

@@ -161,25 +172,23 @@
     #   Topic can only be configured at creation.
     #
     # @raise [Kafka::ResponseError] Error occurred creating the topic
-    # @raise [TopicAlreadyConfiguredError] Passed a config for a topic that has
-    #   already been configured.
+    # @raise [Kafka::FFI::TopicAlreadyConfiguredError] Passed a config for a
+    #   topic that has already been configured.
     #
     # @return [Topic] Topic instance
-    #   error.
     def topic(name, config = nil)
       topic = @topics[name]
       if topic
         if config
           # Make this an exception because it's probably a programmer error
-          # that _should_ only primarily happen during development due to
+          # that _should_ primarily happen during development due to
           # misunderstanding the semantics.
-          raise TopicAlreadyConfigured, "#{name} was already configured"
+          raise ::Kafka::FFI::TopicAlreadyConfiguredError, "#{name} was already configured"
         end

         return topic
       end

-      # @todo - Keep list of topics and manage their lifecycle?
       topic = ::Kafka::FFI.rd_kafka_topic_new(self, name, config)
       if topic.nil?
         raise ::Kafka::ResponseError, ::Kafka::FFI.rd_kafka_last_error
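The renamed error surfaces when re-configuring a topic that is already cached; a brief sketch, where `client` and `config` are illustrative:

```ruby
topic = client.topic("events") # created, configured, and cached
client.topic("events")         # returns the cached Topic

# Passing a config for an already configured topic is treated as a
# programmer error:
client.topic("events", config)
# => raises Kafka::FFI::TopicAlreadyConfiguredError
```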
@@ -353,7 +353,7 @@
     # @raise [ConfigError] Certificate was not properly encoded or librdkafka
     #   was not compiled with SSL/TLS.
     def set_ssl_cert(cert_type, cert_enc, certificate)
-      error = ::MemoryPointer.new(:char, 512)
+      error = ::FFI::MemoryPointer.new(:char, 512)

       err = ::Kafka::FFI.rd_kafka_conf_set_ssl_cert(cert_type, cert_enc, certificate, certificate.bytesize, error, error.size)
       if err != :ok
@@ -106,7 +106,7 @@
     # @raise [Kafka::ResponseError] Error occurred parsing headers
     #
     # @return [nil] Message does not have any headers
-    # @return [Message::Headers] Set of headers
+    # @return [Message::Header] Set of headers
     def headers
       ptr = ::FFI::MemoryPointer.new(:pointer)

@@ -116,7 +116,7 @@
         if ptr.null?
           nil
         else
-          Message::Headers.new(ptr)
+          Message::Header.new(ptr)
         end
       when RD_KAFKA_RESP_ERR__NOENT
         # Messages does not have headers
@@ -136,7 +136,7 @@
     # @raise [Kafka::ResponseError] Error occurred parsing headers
     #
     # @return [nil] Message does not have any headers
-    # @return [Message::Headers] Set of headers
+    # @return [Message::Header] Set of headers
     def detach_headers
       ptr = ::FFI::MemoryPointer.new(:pointer)

@@ -146,7 +146,7 @@
         if ptr.null?
           nil
         else
-          Message::Headers.new(ptr)
+          Message::Header.new(ptr)
         end
       when RD_KAFKA_RESP_ERR__NOENT
         # Messages does not have headers
@@ -41,8 +41,14 @@ module Kafka::FFI
       @registry.delete(opaque.pointer.address)
     end

-    # @param value [Opaque]
+    # @param value [Opaque, nil]
+    #
+    # @return [FFI::Pointer] Pointer referencing the Opaque value.
     def to_native(value, _ctx)
+      if value.nil?
+        return ::FFI::Pointer::NULL
+      end
+
       value.pointer
     end

@@ -33,6 +33,10 @@ module Kafka::FFI
     #   which will be available as Message#opaque in callbacks. The application
     #   MUST call #free on the Opaque once the final callback has been
     #   triggered to avoid leaking memory.
+    #
+    # @raise [Kafka::QueueFullError] Number of messages queued for delivery has
+    #   exceeded capacity. See `queue.buffering.max.messages` config setting to
+    #   adjust.
     def produce(topic, payload, key: nil, partition: nil, headers: nil, timestamp: nil, opaque: nil)
       args = [
         # Ensure librdkafka copies the payload into its own memory since the
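A sketch of handling the new error during produce bursts; the retry count and backoff are arbitrary. Relatedly, passing `opaque: nil` (the default) now maps to a NULL pointer instead of erroring, per the `Opaque` change above:

```ruby
attempts = 0

begin
  producer.produce("events", payload)
rescue Kafka::QueueFullError
  # The local queue hit `queue.buffering.max.messages`. Back off briefly so
  # in-flight deliveries can drain, then retry a few times before giving up.
  attempts += 1
  raise if attempts >= 3

  sleep(0.05 * attempts)
  retry
end
```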
@@ -59,9 +59,13 @@ class Kafka::Producer
       @done
     end

-    # @return [Boolean] Is the report for an error?
+    # Returns if the delivery errored
+    #
+    # @see #successful?
+    #
+    # @return [Boolean] True when the delivery failed with an error.
     def error?
-      error.nil?
+      received? && !successful?
     end

     # Returns if the delivery was successful
@@ -69,7 +73,7 @@
     #
     # @return [Boolean] True when the report was delivered to the cluster
     #   successfully.
     def successful?
-      !error
+      received? && error.nil?
     end

     # @private
@@ -97,6 +101,8 @@
       if @callback
         @callback.call(self)
       end
+
+      nil
     end

     # Wait for a report to be received for the delivery from the cluster.
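With the predicates fixed, a report read after `#wait` behaves as documented; before 0.5.2, `error?` returned true for successful deliveries and both predicates could answer before a report was received:

```ruby
report = producer.produce("events", payload)
report.wait

if report.successful?
  # Delivered to the cluster.
else
  # #error holds the error the delivery failed with.
  warn("delivery failed: #{report.error}")
end
```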
@@ -1,7 +1,7 @@
 # frozen_string_literal: true

 module Kafka
-  VERSION = "0.5.1"
+  VERSION = "0.5.2"

   LIBRDKAFKA_VERSION = "1.3.0"
   LIBRDKAFKA_CHECKSUM = "465cab533ebc5b9ca8d97c90ab69e0093460665ebaf38623209cf343653c76d2"
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: kafka
 version: !ruby/object:Gem::Version
-  version: 0.5.1
+  version: 0.5.2
 platform: ruby
 authors:
 - Chris Gaffney
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2020-01-22 00:00:00.000000000 Z
+date: 2020-01-27 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: ffi