google-cloud-logging 1.5.7 → 1.6.0

Files changed (39)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +40 -0
  3. data/CONTRIBUTING.md +1 -1
  4. data/lib/google-cloud-logging.rb +7 -5
  5. data/lib/google/cloud/logging.rb +22 -8
  6. data/lib/google/cloud/logging/async_writer.rb +308 -187
  7. data/lib/google/cloud/logging/convert.rb +15 -9
  8. data/lib/google/cloud/logging/entry.rb +43 -13
  9. data/lib/google/cloud/logging/entry/source_location.rb +3 -3
  10. data/lib/google/cloud/logging/errors.rb +101 -0
  11. data/lib/google/cloud/logging/log/list.rb +1 -1
  12. data/lib/google/cloud/logging/logger.rb +6 -4
  13. data/lib/google/cloud/logging/middleware.rb +24 -11
  14. data/lib/google/cloud/logging/project.rb +38 -15
  15. data/lib/google/cloud/logging/rails.rb +56 -5
  16. data/lib/google/cloud/logging/resource.rb +1 -1
  17. data/lib/google/cloud/logging/service.rb +42 -32
  18. data/lib/google/cloud/logging/v2/config_service_v2_client.rb +1 -1
  19. data/lib/google/cloud/logging/v2/credentials.rb +1 -1
  20. data/lib/google/cloud/logging/v2/doc/google/api/distribution.rb +1 -37
  21. data/lib/google/cloud/logging/v2/doc/google/api/label.rb +1 -1
  22. data/lib/google/cloud/logging/v2/doc/google/api/metric.rb +1 -13
  23. data/lib/google/cloud/logging/v2/doc/google/api/monitored_resource.rb +1 -1
  24. data/lib/google/cloud/logging/v2/doc/google/logging/type/http_request.rb +1 -1
  25. data/lib/google/cloud/logging/v2/doc/google/logging/v2/log_entry.rb +1 -1
  26. data/lib/google/cloud/logging/v2/doc/google/logging/v2/logging.rb +1 -12
  27. data/lib/google/cloud/logging/v2/doc/google/logging/v2/logging_config.rb +1 -1
  28. data/lib/google/cloud/logging/v2/doc/google/logging/v2/logging_metrics.rb +1 -1
  29. data/lib/google/cloud/logging/v2/doc/google/protobuf/any.rb +1 -1
  30. data/lib/google/cloud/logging/v2/doc/google/protobuf/duration.rb +1 -1
  31. data/lib/google/cloud/logging/v2/doc/google/protobuf/empty.rb +1 -1
  32. data/lib/google/cloud/logging/v2/doc/google/protobuf/field_mask.rb +1 -1
  33. data/lib/google/cloud/logging/v2/doc/google/protobuf/struct.rb +1 -1
  34. data/lib/google/cloud/logging/v2/doc/google/protobuf/timestamp.rb +1 -1
  35. data/lib/google/cloud/logging/v2/logging_service_v2_client.rb +1 -1
  36. data/lib/google/cloud/logging/v2/logging_service_v2_client_config.json +1 -1
  37. data/lib/google/cloud/logging/v2/metrics_service_v2_client.rb +1 -1
  38. data/lib/google/cloud/logging/version.rb +1 -1
  39. metadata +5 -4
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 1e2ccb4454a6582fc62db7993ea414a404812b896a638d38f5648bfc5873cb33
- data.tar.gz: 4958689bf6ce20e588f13ffa5995b4c316e0ff8762d33a6a052fe2f2427c1bd1
+ metadata.gz: e0666c5ecdf0723fa5c0457bf6fd39c46a23e854204f6b3d3c88c0147c76a8ff
+ data.tar.gz: 4ffc01aa4b54a78d1af59681c832a972d5bb65c35d5dc59da3b5366a5e08248e
  SHA512:
- metadata.gz: ec371b02b78e6ae7f128190adfe25d8baf400e9b0dde2b9ea63143fb0df306f945b7148861bd673b09c7a4636a3371626b150773044b2422b1b591ba243164a0
- data.tar.gz: b852883c7e539a9bbe74e433583885f08f585235f3cdb7b3abe842363d6b5436a73a0bbbb731f3dd9a7d0600770704fd4ea9d5383babad377e21bcb357a24fc0
+ metadata.gz: 02a939f01d6ccf2c220a175732fda505140981feeb8fdc249df25831d6efa747b7aa355f7448f56a2b955b0c17e0c05310bf92e105c60c1f4bccd4ff9a9f3223
+ data.tar.gz: 1b94165958561e0f4f19afdde06471c88037ac828ecb97639a2a497eba30a77c9430599b33201495e9d467868df317386e6c2dbfc61e4061d8ace47751972e2b
data/CHANGELOG.md CHANGED
@@ -1,5 +1,45 @@
  # Release History

+ ### 1.6.0 / 2019-01-22
+
+ * AsyncWriter buffers entries and makes batch API calls
+   * Update AsyncWriter to buffer log entries and batch API calls.
+   * Maintain backwards compatibility with the previous AsyncWriter's public API,
+     although the implementation has changed.
+   * Back pressure is applied by limiting the number of queued API calls.
+     Errors will now be raised when there are not enough resources.
+   * Errors are reported using the AsyncWriter#on_error callback.
+   * Pending log entries are sent before the process closes, using at_exit.
+ * Add Logging on_error configuration.
+ * Add default insert_id value for Entry
+   * Add Entry.insert_id
+   * Add default insert_id value for Entry.
+     An Entry object is assigned an insert_id when created, so that if the
+     Entry object gets persisted multiple times it knows its insert_id
+     value and does not attempt to generate a new one for each persist attempt.
+     An Entry object will still be considered empty if the only value it has
+     is the insert_id.
+   * (This change does not use SecureRandom, for performance reasons.)
+ * Add Logging trace_sampled
+   * Add Entry#trace_sampled attribute
+   * Add trace_sampled to Logger::RequestInfo
+ * Changes to Rails default Logger
+   * Delay updating the Rails default logger until the first web request.
+     This avoids issues with forking processes and gRPC resources.
+     This is accomplished by adding the on_init argument to the middleware.
+   * Add Railtie.set_default_logger
+     This method can be called post-fork to update the Rails default logger.
+ * Make use of Credentials#project_id
+   * Use Credentials#project_id.
+     If a project_id is not provided, use the value on the Credentials object.
+     This value was added in googleauth 0.7.0.
+   * Loosen googleauth dependency.
+     Allow for new releases up to 0.10.
+     The googleauth devs have committed to maintaining the current API
+     and will not make backwards-incompatible changes before 0.10.
+ * Direct logs for "/healthz" requests to the health check log.
+ * Update documentation.
+
  ### 1.5.7 / 2018-11-15

  * Add Google::Logging::V2::LogEntry#trace_sampled.
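
The `on_error` and `set_default_logger_on_rails_init` entries above map to two new configuration fields (see the `add_field!` calls later in this diff). A minimal sketch of setting them; the handler body and the choice of `false` are illustrative only:

```ruby
require "google/cloud/logging"

Google::Cloud::Logging.configure do |config|
  # New in 1.6.0: invoked on a background thread instead of the failure
  # being recorded only in last_error.
  config.on_error = ->(error) { warn "logging error: #{error.message}" }

  # New in 1.6.0: leave the Rails default logger untouched during boot;
  # forking servers swap it in post-fork via Railtie.set_default_logger.
  config.set_default_logger_on_rails_init = false
end
```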
data/CONTRIBUTING.md CHANGED
@@ -45,7 +45,7 @@ there is a small amount of setup:

  ```sh
  $ cd google-cloud-logging/
- $ bundle exec rake bundleupdate
+ $ bundle update
  ```

  ## Console
data/lib/google-cloud-logging.rb CHANGED
@@ -67,8 +67,9 @@ module Google
  #   logging = gcloud.logging scope: platform_scope
  #
  def logging scope: nil, timeout: nil, client_config: nil
-   Google::Cloud.logging @project, @keyfile, scope: scope,
-                         timeout: (timeout || @timeout),
+   timeout ||= @timeout
+   Google::Cloud.logging @project, @keyfile, scope: scope,
+                         timeout: timeout,
                          client_config: client_config
  end

@@ -134,16 +135,15 @@ Google::Cloud.configure.add_config! :logging do |config|
    ENV["LOGGING_PROJECT"]
  end
  default_creds = Google::Cloud::Config.deferred do
-   Google::Cloud::Config.credentials_from_env(
+   Google::Cloud::Config.credentials_from_env \
      "LOGGING_CREDENTIALS", "LOGGING_CREDENTIALS_JSON",
      "LOGGING_KEYFILE", "LOGGING_KEYFILE_JSON"
-   )
  end

  config.add_field! :project_id, default_project, match: String, allow_nil: true
  config.add_alias! :project, :project_id
  config.add_field! :credentials, default_creds,
-                   match: [String, Hash, Google::Auth::Credentials],
+                    match: [String, Hash, Google::Auth::Credentials],
                    allow_nil: true
  config.add_alias! :keyfile, :credentials
  config.add_field! :scope, nil, match: [String, Array]
@@ -156,4 +156,6 @@ Google::Cloud.configure.add_config! :logging do |config|
    mrconfig.add_field! :type, nil, match: String
    mrconfig.add_field! :labels, nil, match: Hash
  end
+ config.add_field! :set_default_logger_on_rails_init, nil, enum: [true, false]
+ config.add_field! :on_error, nil, match: Proc
  end
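
For context, the deferred defaults above resolve lazily from the environment. A small sketch of relying on them; the project name and keyfile path are placeholders, and any of the `LOGGING_*` credential variables listed above would work:

```ruby
ENV["LOGGING_PROJECT"]     = "my-project"
ENV["LOGGING_CREDENTIALS"] = "/path/to/keyfile.json"

require "google/cloud/logging"

# With no arguments, project_id and credentials fall back to the
# deferred defaults declared in the configuration block above.
logging = Google::Cloud::Logging.new
```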
data/lib/google/cloud/logging.rb CHANGED
@@ -83,22 +83,25 @@ module Google
  #
  def self.new project_id: nil, credentials: nil, scope: nil, timeout: nil,
               client_config: nil, project: nil, keyfile: nil
-   project_id ||= (project || default_project_id)
-   project_id = project_id.to_s # Always cast to a string
-   raise ArgumentError, "project_id is missing" if project_id.empty?
-
-   scope ||= configure.scope
-   timeout ||= configure.timeout
+   project_id ||= (project || default_project_id)
+   scope ||= configure.scope
+   timeout ||= configure.timeout
    client_config ||= configure.client_config
+   credentials ||= (keyfile || default_credentials(scope: scope))

-   credentials ||= (keyfile || default_credentials(scope: scope))
    unless credentials.is_a? Google::Auth::Credentials
      credentials = Logging::Credentials.new credentials, scope: scope
    end

+   if credentials.respond_to? :project_id
+     project_id ||= credentials.project_id
+   end
+   project_id = project_id.to_s # Always cast to a string
+   raise ArgumentError, "project_id is missing" if project_id.empty?
+
    Logging::Project.new(
      Logging::Service.new(
-       project_id, credentials, timeout: timeout,
+       project_id, credentials, timeout: timeout,
        client_config: client_config
      )
    )
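
The reordering above lets the project ID come from the credentials themselves (googleauth 0.7.0+ exposes `Credentials#project_id`). A hedged sketch; the keyfile path is a placeholder:

```ruby
require "google/cloud/logging"

# Before 1.6.0 this raised "project_id is missing" unless a project was
# supplied explicitly; now the ID is read from the service account key.
logging = Google::Cloud::Logging.new credentials: "/path/to/keyfile.json"
logging.project #=> the project_id carried by the credentials
```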
@@ -136,6 +139,17 @@ module Google
  # * `labels` - (Hash) User defined labels. A `Hash` of label names to
  #   string label values or callables/`Proc` which are functions of the
  #   Rack environment.
+ # * `set_default_logger_on_rails_init` - (Boolean) Whether Google Cloud
+ #   Logging Logger should be allowed to start background threads and open
+ #   gRPC connections during Rails initialization. This should only be used
+ #   with a non-forking web server. Web servers such as Puma and Unicorn
+ #   should not set this, and instead set the Rails logger to a Google
+ #   Cloud Logging Logger object on the worker process by calling
+ #   {Railtie.set_default_logger} at the appropriate time, such as a
+ #   post-fork hook.
+ # * `on_error` - (Proc) A Proc to be run when an error is encountered
+ #   on a background thread. The Proc must take the error object as the
+ #   single argument. (See {AsyncWriter.on_error}.)
  #
  # See the [Configuration
  # Guide](https://googleapis.github.io/google-cloud-ruby/docs/stackdriver/latest/file.INSTRUMENTATION_CONFIGURATION)
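
A sketch of the post-fork hook the `set_default_logger_on_rails_init` note above refers to, using Puma's conventional `config/puma.rb` hook; it assumes the Rails integration from this gem (or the stackdriver meta-gem) is already loaded:

```ruby
# config/puma.rb
on_worker_boot do
  # Build the Google Cloud Logging logger in the worker process, after the
  # fork, so its background threads and gRPC channels belong to the worker.
  Google::Cloud::Logging::Railtie.set_default_logger
end
```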
data/lib/google/cloud/logging/async_writer.rb CHANGED
@@ -13,8 +13,9 @@
  # limitations under the License.


- require "set"
- require "stackdriver/core/async_actor"
+ require "monitor"
+ require "concurrent"
+ require "google/cloud/logging/errors"

  module Google
    module Cloud
@@ -22,12 +23,13 @@ module Google
  ##
  # # AsyncWriter
  #
- # An object that batches and transmits log entries asynchronously.
+ # AsyncWriter buffers, batches, and transmits log entries efficiently.
+ # Writing log entries is asynchronous and will not block.
  #
- # Use this object to transmit log entries efficiently. It keeps a queue
- # of log entries, and runs a background thread that transmits them to
- # the logging service in batches. Generally, adding to the queue will
- # not block.
+ # Batches that cannot be delivered immediately are queued. When the queue
+ # is full new batch requests will raise errors that can be consumed using
+ # the {#on_error} callback. This provides back pressure in case the writer
+ # cannot keep up with requests.
  #
  # This object is thread-safe; it may accept write requests from
  # multiple threads simultaneously, and will serialize them when
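
Because the new writer serializes concurrent write requests internally, several request threads can share one AsyncWriter without extra locking. A small sketch, assuming `logging` and `resource` are set up as in the surrounding examples:

```ruby
async = logging.async_writer

threads = 4.times.map do |i|
  Thread.new do
    entry = logging.entry payload: "worker #{i} finished",
                          log_name: "my_app_log"
    # Non-blocking: the entry is buffered and sent by the background thread.
    async.write_entries entry, resource: resource
  end
end
threads.each(&:join)
```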
@@ -54,58 +56,37 @@ module Google
  #   labels: labels
  #
  class AsyncWriter
-   include Stackdriver::Core::AsyncActor
-
-   DEFAULT_MAX_QUEUE_SIZE = 10000
-   CLEANUP_TIMEOUT = Stackdriver::Core::AsyncActor::CLEANUP_TIMEOUT
-   WAIT_INTERVAL = Stackdriver::Core::AsyncActor::WAIT_INTERVAL
+   include MonitorMixin

    ##
-   # @private Item in the log entries queue.
-   QueueItem = Struct.new(:entries, :log_name, :resource, :labels) do
-     def try_combine next_item
-       if log_name == next_item.log_name &&
-          resource == next_item.resource &&
-          labels == next_item.labels
-         entries.concat(next_item.entries)
-         true
-       else
-         false
-       end
-     end
-   end
+   # @private Implementation accessors
+   attr_reader :logging, :max_bytes, :max_count, :interval,
+               :threads, :max_queue, :partial_success

    ##
-   # @private The logging object.
-   attr_accessor :logging
+   # @private Creates a new AsyncWriter instance.
+   def initialize logging, max_count: 10000, max_bytes: 10000000,
+                  max_queue: 100, interval: 5, threads: 10,
+                  partial_success: false
+     @logging = logging

-   ##
-   # @private The maximum size of the entries queue, or nil if not set.
-   attr_accessor :max_queue_size
+     @max_count = max_count
+     @max_bytes = max_bytes
+     @max_queue = max_queue
+     @interval = interval
+     @threads = threads

-   ##
-   # The current state. Either :running, :suspended, :stopping, or :stopped
-   #
-   # DEPRECATED. Use #async_state instead.
-   alias state async_state
+     @partial_success = partial_success

-   ##
-   # The last exception thrown by the background thread, or nil if nothing
-   # has been thrown.
-   attr_reader :last_exception
+     @error_callbacks = []

-   ##
-   # @private Creates a new AsyncWriter instance.
-   def initialize logging, max_queue_size = DEFAULT_MAX_QUEUE_SIZE,
-                  partial_success = false
-     super()
+     @cond = new_cond

-     @logging = logging
-     @max_queue_size = max_queue_size
-     @partial_success = partial_success
-     @queue_resource = new_cond
-     @queue = []
-     @queue_size = 0
+     # Make sure all buffered messages are sent when process exits.
+     at_exit { stop }
+
+     # init MonitorMixin
+     super()
    end

    ##
@@ -149,19 +130,35 @@ module Google
  #   async.write_entries entry
  #
  def write_entries entries, log_name: nil, resource: nil, labels: nil
-   ensure_thread
-   entries = Array(entries)
    synchronize do
-     raise "AsyncWriter has been stopped" unless writable?
-     queue_item = QueueItem.new entries, log_name, resource, labels
-     if @queue.empty? || !@queue.last.try_combine(queue_item)
-       @queue.push queue_item
-     end
-     @queue_size += entries.size
-     @queue_resource.broadcast
-     while @max_queue_size && @queue_size > @max_queue_size
-       @queue_resource.wait
+     raise "AsyncWriter has been stopped" if @stopped
+
+     Array(entries).each do |entry|
+       # Update the entry to have all the data directly on it
+       entry.log_name ||= log_name
+       if entry.resource.nil? || entry.resource.empty?
+         entry.resource = resource
+       end
+       entry.labels = labels if entry.labels.nil? || entry.labels.empty?
+
+       # Add the entry to the batch
+       @batch ||= Batch.new self
+       next if @batch.try_add entry
+
+       # If we can't add to the batch, publish and create a new batch
+       publish_batch!
+       @batch = Batch.new self
+       @batch.add entry
      end
+
+     @thread_pool ||= \
+       Concurrent::CachedThreadPool.new max_threads: @threads,
+                                        max_queue: @max_queue
+     @thread ||= Thread.new { run_background }
+
+     publish_batch! if @batch.ready?
+
+     @cond.broadcast
    end
    self
  end
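
Note the defaulting behavior added above: fields already present on an entry win, and the `log_name:`, `resource:`, and `labels:` arguments only fill in blanks before the entry is batched. A short sketch under the same assumptions as the docstring example:

```ruby
entry = logging.entry payload: "Job started."

async.write_entries entry,
                    log_name: "my_app_log",
                    resource: resource,
                    labels: { "env" => "production" }

entry.log_name #=> "my_app_log" (copied onto the entry before batching)
```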
@@ -201,182 +198,306 @@ module Google
  end

  ##
- # Stops this asynchronous writer.
- #
- # After this call succeeds, the state will change to :stopping, and
- # you may not issue any additional write_entries calls. Any previously
- # issued writes will complete. Once any existing backlog has been
- # cleared, the state will change to :stopped.
- #
- # DEPRECATED. Use #async_stop instead.
+ # Begins the process of stopping the writer. Entries already in the
+ # queue will be published, but no new entries can be added. Use {#wait!}
+ # to block until the writer is fully stopped and all pending entries
+ # have been published.
  #
- # @return [Boolean] Returns true if the writer was running, or false
- #   if the writer had already been stopped.
- #
- alias stop async_stop
+ # @return [AsyncWriter] returns self so calls can be chained.
+ def stop
+   synchronize do
+     break if @stopped
+
+     @stopped = true
+     publish_batch!
+     @cond.broadcast
+     @thread_pool.shutdown if @thread_pool
+   end
+
+   self
+ end

  ##
- # Suspends this asynchronous writer.
- #
- # After this call succeeds, the state will change to :suspended, and
- # the writer will stop sending RPCs until resumed.
- #
- # DEPRECATED. Use #async_suspend instead.
+ # Blocks until the writer is fully stopped, all pending entries have
+ # been published, and all callbacks have completed. Does not stop the
+ # writer. To stop the writer, first call {#stop} and then call {#wait!}
+ # to block until the writer is stopped.
  #
- # @return [Boolean] Returns true if the writer had been running and was
- #   suspended, otherwise false.
- #
- alias suspend async_suspend
+ # @return [AsyncWriter] returns self so calls can be chained.
+ def wait! timeout = nil
+   synchronize do
+     if @thread_pool
+       @thread_pool.shutdown
+       @thread_pool.wait_for_termination timeout
+     end
+   end
+
+   self
+ end

  ##
- # Resumes this suspended asynchronous writer.
- #
- # After this call succeeds, the state will change to :running, and
- # the writer will resume sending RPCs.
+ # Stop this asynchronous writer and block until it has been stopped.
  #
- # DEPRECATED. Use #async_resume instead.
+ # @param [Number] timeout Timeout in seconds.
+ # @param [Boolean] force If set to true, and the writer hasn't stopped
+ #   within the given timeout, kill it forcibly by terminating the
+ #   thread. This should be used with extreme caution, as it can
+ #   leave RPCs unfinished. Default is false.
  #
- # @return [Boolean] Returns true if the writer had been suspended and
- #   is now running, otherwise false.
+ # @return [Symbol] Returns `:stopped` if the AsyncWriter was already
+ #   stopped at the time of invocation, `:waited` if it stopped
+ #   during the timeout period, `:timeout` if it is still running
+ #   after the timeout, or `:forced` if it was forcibly killed.
  #
- alias resume async_resume
+ def stop! timeout = nil, force: nil
+   return :stopped if stopped?
+
+   stop
+   wait! timeout
+
+   if synchronize { @thread_pool.shutdown? }
+     return :waited if timeout
+   elsif force
+     @thread_pool.kill
+     return :forced
+   end
+   :timeout
+ end
+ alias async_stop! stop!

  ##
- # Returns true if this writer is running.
+ # Forces all entries in the current batch to be published
+ # immediately.
  #
- # DEPRECATED. Use #async_running? instead.
+ # @return [AsyncWriter] returns self so calls can be chained.
+ def flush
+   synchronize do
+     publish_batch!
+     @cond.broadcast
+   end
+
+   self
+ end
+
+ ##
+ # Whether the writer has been started.
  #
- # @return [Boolean] Returns true if the writer is currently running.
+ # @return [boolean] `true` when started, `false` otherwise.
+ def started?
+   !stopped?
+ end
+
+ ##
+ # Whether the writer has been stopped.
  #
- alias running? async_running?
+ # @return [boolean] `true` when stopped, `false` otherwise.
+ def stopped?
+   synchronize { @stopped }
+ end

  ##
- # Returns true if this writer is suspended.
+ # Register to be notified of errors when raised.
  #
- # DEPRECATED. Use #async_suspended? instead.
+ # If an unhandled error has occurred the writer will attempt to
+ # recover from the error and resume buffering, batching, and
+ # transmitting log entries.
  #
- # @return [Boolean] Returns true if the writer is currently suspended.
+ # Multiple error handlers can be added.
  #
- alias suspended? async_suspended?
-
- ##
- # Returns true if this writer is still accepting writes. This means
- # it is either running or suspended.
+ # @yield [callback] The block to be called when an error is raised.
+ # @yieldparam [Exception] error The error raised.
  #
- # DEPRECATED. Use #async_working? instead.
+ # @example
+ #   require "google/cloud/logging"
+ #   require "google/cloud/error_reporting"
  #
- # @return [Boolean] Returns true if the writer is accepting writes.
+ #   logging = Google::Cloud::Logging.new
  #
- alias writable? async_working?
-
- ##
- # Returns true if this writer is fully stopped.
+ #   resource = logging.resource "gae_app",
+ #                               module_id: "1",
+ #                               version_id: "20150925t173233"
+ #
+ #   async = logging.async_writer
  #
- # DEPRECATED. Use #async_stopped? instead.
+ #   # Register to be notified when unhandled errors occur.
+ #   async.on_error do |error|
+ #     # error can be an AsyncWriterError or AsyncWriteEntriesError
+ #     Google::Cloud::ErrorReporting.report error
+ #   end
  #
- # @return [Boolean] Returns true if the writer is fully stopped.
+ #   logger = async.logger "my_app_log", resource, env: :production
+ #   logger.info "Job started."
  #
- alias stopped? async_stopped?
+ def on_error &block
+   synchronize do
+     @error_callbacks << block
+   end
+ end

  ##
- # Blocks until this asynchronous writer has been stopped, or the given
- # timeout (if present) has elapsed.
+ # The most recent unhandled error to occur while transmitting log
+ # entries.
  #
- # DEPRECATED. Use #wait_until_async_stopped instead.
+ # If an unhandled error has occurred the subscriber will attempt to
+ # recover from the error and resume buffering, batching, and
+ # transmitting log entries.
  #
- # @param [Number, nil] timeout Timeout in seconds, or `nil` for no
- #   timeout.
+ # @return [Exception, nil] error The most recent error raised.
  #
- # @return [Boolean] Returns true if the writer is stopped, or false
- #   if the timeout expired.
+ # @example
+ #   require "google/cloud/logging"
  #
- alias wait_until_stopped wait_until_async_stopped
-
- ##
- # Stop this asynchronous writer and block until it has been stopped.
+ #   logging = Google::Cloud::Logging.new
  #
- # DEPRECATED. Use #async_stop! instead.
+ #   resource = logging.resource "gae_app",
+ #                               module_id: "1",
+ #                               version_id: "20150925t173233"
  #
- # @param [Number] timeout Timeout in seconds.
- # @param [Boolean] force If set to true, and the writer hasn't stopped
- #   within the given timeout, kill it forcibly by terminating the
- #   thread. This should be used with extreme caution, as it can
- #   leave RPCs unfinished. Default is false.
+ #   async = logging.async_writer
  #
- # @return [Symbol] Returns `:stopped` if the AsyncWriter was already
- #   stopped at the time of invocation, `:waited` if it stopped
- #   during the timeout period, `:timeout` if it is still running
- #   after the timeout, or `:forced` if it was forcibly killed.
+ #   logger = async.logger "my_app_log", resource, env: :production
+ #   logger.info "Job started."
  #
- def stop! timeout, force: false
-   @cleanup_options[:timeout] = timeout unless timeout.nil?
-   @cleanup_options[:force] = force unless force.nil?
-
-   async_stop!
+ #   # If an error was raised, it can be retrieved here:
+ #   async.last_error #=> nil
+ #
+ def last_error
+   synchronize { @last_error }
  end
+ alias last_exception last_error

- ##
- # @private Callback function when the async actor thread state changes
- def on_async_state_change
+ protected
+
+ def run_background
    synchronize do
-     @queue_resource.broadcast
+     until @stopped
+       if @batch.nil?
+         @cond.wait
+         next
+       end
+
+       if @batch.ready?
+         # interval met, publish the batch...
+         publish_batch!
+         @cond.wait
+       else
+         # still waiting for the interval to publish the batch...
+         @cond.wait(@batch.publish_wait)
+       end
+     end
    end
  end

- protected
+ def publish_batch!
+   return unless @batch

- ##
- # @private The background thread implementation, which continuously
- # waits for and performs work, and returns only when fully stopped.
- #
- def run_backgrounder
-   queue_item = wait_next_item
-   return unless queue_item
-   begin
-     logging.write_entries(
-       queue_item.entries,
-       log_name: queue_item.log_name,
-       resource: queue_item.resource,
-       labels: queue_item.labels,
-       partial_success: @partial_success
-     )
-   rescue StandardError => e
-     # Ignore any exceptions thrown from the background thread, but
-     # keep running to ensure its state behavior remains consistent.
-     @last_exception = e
+   batch_to_be_published = @batch
+   @batch = nil
+   publish_batch_async batch_to_be_published
+ end
+
+ # Sets the last_error and calls all error callbacks.
+ def error! error
+   error_callbacks = synchronize do
+     @last_error = error
+     @error_callbacks
    end
+   error_callbacks = default_error_callbacks if error_callbacks.empty?
+   error_callbacks.each { |error_callback| error_callback.call error }
  end

- ##
- # @private Wait for and dequeue the next set of log entries to transmit.
- #
- # @return [QueueItem, nil] Returns the next set of entries. If
- #   the writer has been stopped and no more entries are left in the
- #   queue, returns `nil`.
- #
- def wait_next_item
-   synchronize do
-     while state == :suspended ||
-           (state == :running && @queue.empty?)
-       @queue_resource.wait
-     end
-     queue_item = nil
-     unless @queue.empty?
-       queue_item = @queue.shift
-       @queue_size -= queue_item.entries.size
+ def default_error_callbacks
+   # This is memoized to reduce calls to the configuration.
+   @default_error_callbacks ||= begin
+     error_callback = Google::Cloud::Logging.configuration.on_error
+     error_callback ||= Google::Cloud.configure.on_error
+     if error_callback
+       [error_callback]
+     else
+       []
      end
-     @queue_resource.broadcast
-     queue_item
    end
  end

+ def publish_batch_async batch
+   Concurrent::Promises.future_on(
+     @thread_pool, batch.entries
+   ) do |entries|
+     write_entries_with entries
+   end
+ rescue Concurrent::RejectedExecutionError => e
+   async_error = AsyncWriterError.new(
+     "Error writing entries: #{e.message}",
+     batch.entries
+   )
+   # Manually set backtrace so we don't have to raise
+   async_error.set_backtrace caller
+   error! async_error
+ end
+
+ def write_entries_with entries
+   logging.write_entries entries, partial_success: partial_success
+ rescue StandardError => e
+   write_error = AsyncWriteEntriesError.new(
+     "Error writing entries: #{e.message}",
+     entries
+   )
+   # Manually set backtrace so we don't have to raise
+   write_error.set_backtrace caller
+   error! write_error
+ end
+
  ##
- # @private Override the #backgrounder_stoppable? method from AsyncActor
- #   module. The actor can be gracefully stopped when queue is
- #   empty.
- def backgrounder_stoppable?
-   synchronize do
-     @queue.empty?
+ # @private
+ class Batch
+   attr_reader :created_at, :entries
+
+   def initialize writer
+     @writer = writer
+     @entries = []
+     @entries_bytes = 2 # initial size w/ partial_success
+     @created_at = nil
+   end
+
+   def add entry, addl_bytes: nil
+     addl_bytes ||= addl_bytes_for entry
+     @entries << entry
+     @entries_bytes += addl_bytes
+     @created_at ||= Time.now
+     nil
+   end
+
+   def try_add entry
+     addl_bytes = addl_bytes_for entry
+     new_message_count = @entries.count + 1
+     new_message_bytes = @entries_bytes + addl_bytes
+     if new_message_count > @writer.max_count ||
+        new_message_bytes >= @writer.max_bytes
+       return false
+     end
+     add entry, addl_bytes: addl_bytes
+     true
+   end
+
+   def ready?
+     @entries.count >= @writer.max_count ||
+       @entries_bytes >= @writer.max_bytes ||
+       (@created_at.nil? || (publish_at < Time.now))
+   end
+
+   def publish_at
+     return nil if @created_at.nil?
+     @created_at + @writer.interval
+   end
+
+   def publish_wait
+     publish_wait = publish_at - Time.now
+     return 0 if publish_wait < 0
+     publish_wait
+   end
+
+   def addl_bytes_for entry
+     entry.to_grpc.to_proto.bytesize + 2
      end
    end
  end
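
Taken together, the new lifecycle methods allow an explicit, bounded shutdown. The `at_exit` hook added in the constructor already calls `#stop`, so this sketch is mainly relevant before forking or when a hard deadline is needed; the 10-second timeout is an arbitrary example:

```ruby
async.flush      # hand the current batch to the thread pool right away
async.stop       # refuse new entries; queue whatever is still buffered
async.wait! 10   # block up to 10 seconds for in-flight RPCs to finish

# ...or the combined form, forcibly killing the pool if it overruns:
async.stop! 10, force: true
```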