hutch-schedule 0.7.0 → 0.7.2
- checksums.yaml +4 -4
- data/README.md +78 -4
- data/lib/hutch/enqueue.rb +22 -0
- data/lib/hutch/patch/config.rb +9 -4
- data/lib/hutch/patch/worker.rb +12 -6
- data/lib/hutch/schedule.rb +19 -2
- data/lib/hutch/schedule/version.rb +1 -1
- data/lib/hutch/threshold.rb +20 -13
- metadata +2 -2
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 51507037110c8e884ac519fe99c27f8547da70f5ada9744bc034a7ec10da5ee3
+  data.tar.gz: 45f2fcb4e686f86a13bcded021c661b69227c8eca5f5b0b96764a9ac840a8372
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 1c507eead9eef80897516c1296cf3eaa3df4674fbba1b188138ce06317a332cfe886291e596a403f5fa3573079f4b9650a0206252e5567b451bda54dadeb07e4
+  data.tar.gz: 2f38719bc7da6b3ea9a7826a5e82555336b81b46acb62622af6d33f7ab1f52679af6f77d7e048274168dcb1166a10eb5ead8fe29fe1ffa5200a4022022444e18
data/README.md CHANGED
@@ -1,9 +1,26 @@
-# Hutch
+# Hutch Schedule
 
 Add the schedule message function to [Hutch](https://github.com/gocardless/hutch).
 
 See [hutch-schedule-demo](https://github.com/wppurking/hutch-schedule-demo) for how to integrate with Rails.
 
+## Contents
+
+- [Hutch Schedule](#hutch-schedule)
+- [Contents](#contents)
+- [Installation](#installation)
+- [Usage](#usage)
+- [Hutch::Enqueue](#hutchenqueue)
+- [Hutch::Threshold](#hutchthreshold)
+- [Error Retry](#error-retry)
+- [Rails](#rails)
+- [Work with Hutch itself](#work-with-hutch-its-self)
+- [Work with ActiveJob](#work-with-activejob)
+- [Limits](#limits)
+- [Contributing](#contributing)
+- [License](#license)
+- [TODO](#todo)
+
 ## Installation
 
 Add this line to your application's Gemfile:
@@ -36,7 +53,17 @@ They will do something below:
 - Set `x-message-ttl: <30.days>` to keep the queue from growing too large, since no consumer reads from this queue.
 3. If ActiveJob is loaded, it uses `ActiveJob::Base.descendants` to register every ActiveJob class with Hutch::Consumer, one job per consumer.
 
-
+## Configurations
+
+Name | Default Value | Description
+-----|---------------|-------------
+worker_pool_size | 20 | Monkey patches `Hutch::Worker` to set the FixedThreadPool thread size (not the bunny ConsumerWorkPool size)
+poller_interval | 1 | Seconds between poller runs; the poller moves messages from the buffer queue into the FixedThreadPool
+poller_batch_size | 100 | Number of messages in every batch triggered by the poller
+redis_url | redis://127.0.0.1:6379/0 | Redis backend URL for Ratelimit and Unique Job
+ratelimit_bucket_interval | 1 | Time bucket (seconds) Ratelimit uses to store counts; lower is more accurate
+
 ## Hutch::Enqueue
 Include `Hutch::Enqueue` in a consumer and it gains the ability to publish messages to RabbitMQ with the `consume '<routing_key>'`
 
 * enqueue: just publish one message
@@ -53,7 +80,42 @@ We design the fixed delay level from seconds to hours, below is the details:
 RabbitMQ is not a good fit for storing lots of delayed messages, so if you want to delay a message beyond 3 hours you need to store it
 in a database or some other place.
 
-
+## Hutch::Threshold
+Include `Hutch::Threshold` in a consumer to throttle its message consumption. It is included automatically when a
+consumer includes `Hutch::Enqueue`.
+
+1. static configuration
+```ruby
+class PlanConsumer
+  include Hutch::Consumer
+  include Hutch::Enqueue
+
+  attempts 3
+  consume 'abc.plan'
+  # threshold 3 jobs per second
+  threshold rate: 3, interval: 1
+end
+```
+
+2. dynamic lambda
+```ruby
+class PlanConsumer
+  include Hutch::Consumer
+  include Hutch::Enqueue
+
+  attempts 3
+  consume 'abc.plan'
+  # threshold 2 jobs every 2 seconds with the get_report context
+  threshold -> { { context: 'get_report', rate: 2, interval: 2 } }
+end
+```
+
+The threshold lambda must return a Hash that includes:
+* context: the rate-limit context used by this threshold
+* rate: the rate of the threshold
+* interval: the time window of the threshold
+
 ## Error Retry
 If you want to use error retry, then:
 
 1. Add `Hutch::ErrorHandlers::MaxRetry` to `Hutch::Config.error_handlers` like below
@@ -96,6 +158,12 @@ PlanConsumer.enqueue_in(5.seconds, a: 1)
 ```
 
 ### Work with ActiveJob
+Configure Rails to use the `:hutch` ActiveJob adapter:
+```
+config.active_job.queue_adapter = :hutch
+config.active_job.queue_name_prefix = 'ajd'
+```
+
 ```ruby
 class EmailJob < ApplicationJob
   queue_as :email
@@ -114,6 +182,13 @@ EmailJob.perform_later(user.id)
 EmailJob.set(wait: 5.seconds).perform_later(user.id)
 ```
 
+## Limits
+Because the new features are added via monkey patches, hutch-schedule will not work if you run the hutch CLI directly
+without somewhere to require the hutch-schedule gem.
+
+* When run directly, the hutch CLI has only one hook (for Rails apps) to load external dependencies, so a non-Rails app will not work right now.
+* Hutch config files must be loaded during Rails app initialization, because the `Hutch::Config` monkey patch needs to load first in order to work.
+
 ## Contributing
 
 Bug reports and pull requests are welcome on GitHub at https://github.com/wppurking/hutch-schedule. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the [Contributor Covenant](http://contributor-covenant.org) code of conduct.
@@ -128,4 +203,3 @@ Use the repo: https://github.com/wppurking/hutch-schedule-demo
 
 # TODO
 * add cron job support
-* add unique job support
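The README changes above reference fixed delay levels running from seconds up to a 3-hour cap. A hypothetical helper sketching how an arbitrary delay could be rounded up to a fixed level (the `LEVELS` table and `delay_level` name are illustrative, not the gem's actual implementation):

```ruby
# Hypothetical fixed delay levels, in seconds, capped at 3 hours.
# These values are illustrative only.
LEVELS = [5, 10, 30, 60, 300, 1_800, 3_600, 10_800].freeze

# Round an arbitrary delay up to the nearest supported level; anything
# beyond the largest level is clamped to it (store such messages in a
# database instead, as the README advises).
def delay_level(seconds)
  LEVELS.find { |level| level >= seconds } || LEVELS.last
end

delay_level(7)      # => 10
delay_level(90_000) # => 10800 (clamped to 3 hours)
```

Rounding to a fixed set of levels is what lets a RabbitMQ-based scheduler get by with a bounded number of delay queues instead of one TTL per message.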
data/lib/hutch/enqueue.rb CHANGED
@@ -16,6 +16,12 @@ module Hutch
       Hutch.publish(enqueue_routing_key, message)
     end
 
+    # enqueue a unique message
+    def enqueue_uniq(uniq_key, message)
+      return false unless uniq_key_check(uniq_key)
+      enqueue(message)
+    end
+
     # publish a message after a delay
     # interval: delay interval in seconds
     # message: the message to publish
@@ -34,6 +40,11 @@ module Hutch
       Hutch::Schedule.publish(delay_routing_key, message, properties)
     end
 
+    def enqueue_uniq_in(uniq_key, interval, message, props = {})
+      return false unless uniq_key_check(uniq_key)
+      enqueue_in(interval, message, props)
+    end
+
     # delay until an exact point in time
     def enqueue_at(time, message, props = {})
       # compatible with the ActiveJob API
@@ -43,6 +54,17 @@ module Hutch
       enqueue_in(interval, message, props)
     end
 
+    def enqueue_uniq_at(uniq_key, time, message, props = {})
+      return false unless uniq_key_check(uniq_key)
+      enqueue_at(time, message, props)
+    end
+
+    # check whether uniq_key is already set
+    # expire time is set to 24h
+    def uniq_key_check(uniq_key)
+      Hutch::Schedule.ns.set(uniq_key, "1", ex: 86400, nx: true)
+    end
+
     # routing_key: the purpose is to send the message to the hutch exchange and have it routed to the correct queue,
     # so any routing key that the consumer consumes can be used.
     def enqueue_routing_key
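The `enqueue_uniq*` methods added above all gate on `uniq_key_check`, which calls Redis `SET` with `ex: 86400, nx: true` so that only the first publisher of a given key succeeds within 24 hours. A runnable in-memory sketch of that semantics (`FakeRedis` is purely illustrative, not part of the gem):

```ruby
# FakeRedis mimics the one Redis call uniq_key_check relies on:
# SET key value EX <ttl> NX -- write only if absent, with an expiry.
# The first caller gets true; later callers get false until expiry.
class FakeRedis
  def initialize
    @store = {} # key => expiry timestamp (float seconds)
  end

  # Mirrors redis-rb's set(key, value, ex:, nx:) true/false convention.
  def set(key, _value, ex:, nx:)
    now = Time.now.to_f
    @store.delete_if { |_k, expires_at| expires_at <= now } # drop expired keys
    return false if nx && @store.key?(key)
    @store[key] = now + ex
    true
  end
end

redis = FakeRedis.new
redis.set("order-123", "1", ex: 86_400, nx: true) # => true  (first enqueue wins)
redis.set("order-123", "1", ex: 86_400, nx: true) # => false (duplicate suppressed)
```

With the real namespaced client returned by `Hutch::Schedule.ns`, this is what makes `enqueue_uniq` drop duplicate publishes across processes for 24 hours.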
data/lib/hutch/patch/config.rb CHANGED
@@ -10,19 +10,24 @@ module Hutch
     # Hutch Schedule poller batch size
     number_setting :poller_batch_size, 100
 
-    #
-    string_setting :
+    # Redis url for ratelimit and unique job
+    string_setting :redis_url, "redis://127.0.0.1:6379/0"
 
     # Ratelimit bucket interval
     number_setting :ratelimit_bucket_interval, 1
 
+    # Ratelimit redis backend reconnect attempts
+    number_setting :ratelimit_redis_reconnect_attempts, 10
+
     initialize(
       worker_pool_size: 20,
       poller_interval: 1,
       poller_batch_size: 100,
       # @see Redis::Client
-
-      ratelimit_bucket_interval:
+      redis_url: "redis://127.0.0.1:6379/0",
+      ratelimit_bucket_interval: 1,
+      ratelimit_redis_reconnect_attempts: 10
     )
+    define_methods
   end
 end
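Assuming hutch's standard `Hutch::Config.set` API, the settings introduced above could be overridden in an initializer; the file path is hypothetical and the values shown are the patch's own defaults:

```ruby
# config/initializers/hutch.rb -- illustrative; assumes hutch's
# Hutch::Config.set API and repeats the defaults from the patch above.
Hutch::Config.set(:worker_pool_size, 20)
Hutch::Config.set(:poller_interval, 1)
Hutch::Config.set(:poller_batch_size, 100)
Hutch::Config.set(:redis_url, "redis://127.0.0.1:6379/0")
Hutch::Config.set(:ratelimit_bucket_interval, 1)
Hutch::Config.set(:ratelimit_redis_reconnect_attempts, 10)
```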
data/lib/hutch/patch/worker.rb CHANGED
@@ -19,15 +19,21 @@ module Hutch
       heartbeat_connection
       retry_buffer_queue
     end
-
-
-
+
+    # The queue size can match the channel prefetch, and every Consumer has its own buffer queue of that size;
+    # once a buffer queue holds prefetch-size messages, RabbitMQ stops pushing to that consumer, which is fine.
+    # Consumers are throttled through the shared Redis instance.
+    @buffer_queue = ::Queue.new
+    @batch_size = Hutch::Config.get(:poller_batch_size)
+    @connected = Hutch.connected?
   end
 
-    #
-
-
+  # Stop a running worker by killing all subscriber threads.
+  # Stops both thread pools.
+  def stop
     @timer_worker.shutdown
+    @message_worker.shutdown
+    @broker.stop
   end
 
   # Bind a consumer's routing keys to its queue, and set up a subscription to
data/lib/hutch/schedule.rb CHANGED
@@ -30,7 +30,7 @@ module Hutch
     ActiveJob::QueueAdapters::HutchAdapter.register_actice_job_classes if defined?(ActiveJob::QueueAdapters::HutchAdapter)
 
     return if core.present?
-    Hutch.connect
+    Hutch.connect
     @core = Hutch::Schedule::Core.new(Hutch.broker)
     @core.connect!
   end
@@ -40,11 +40,28 @@ module Hutch
     @core = nil
   end
 
-
   def core
     @core
   end
 
+  # redis with namespace
+  def ns
+    @redis ||= Redis::Namespace.new(:hutch, redis: Redis.new(
+      url: Hutch::Config.get(:redis_url),
+      # https://github.com/redis/redis-rb#reconnections
+      # retry 10 times, costing at most 10 * 30 = 300s in total
+      reconnect_attempts: Hutch::Config.get(:ratelimit_redis_reconnect_attempts),
+      reconnect_delay: 3,
+      reconnect_delay_max: 30.0,
+    ))
+  end
+
+  # all Consumers that use the threshold module share the same redis instance
+  def redis
+    ns.redis
+  end
+
   def publish(*args)
     core.publish(*args)
   end
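The new `ns` method above wraps the Redis connection in `Redis::Namespace.new(:hutch, ...)`, so every key is stored under a `hutch:` prefix. A minimal stand-in illustrating the effect (`TinyNamespace` is not part of the gem; a plain Hash stands in for Redis):

```ruby
# TinyNamespace shows the key-prefixing effect of Redis::Namespace:
# every write goes to "<prefix>:<key>" on the underlying store.
class TinyNamespace
  attr_reader :redis # the raw, un-namespaced backend

  def initialize(prefix, redis:)
    @prefix = prefix
    @redis = redis
  end

  def set(key, value, **_opts) # ex:/nx: accepted but ignored in the sketch
    @redis["#{@prefix}:#{key}"] = value
    true
  end
end

store = {}
ns = TinyNamespace.new(:hutch, redis: store)
ns.set("uniq-abc", "1", ex: 86_400, nx: true)
store.keys # => ["hutch:uniq-abc"]
```

The `redis` method returning `ns.redis` matches this shape: the ratelimit gem gets the raw connection, while unique-job keys go through the namespace.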
data/lib/hutch/threshold.rb CHANGED
@@ -1,4 +1,7 @@
 require 'active_support/concern'
+require 'active_support/core_ext/object/blank'
+require 'active_support/core_ext/module/attribute_accessors'
+require 'hutch/schedule'
 require 'ratelimit'
 
 module Hutch
@@ -8,6 +11,10 @@ module Hutch
 
   # Add Consumer methods
   class_methods do
+    mattr_accessor :default_context, default: "default"
+    mattr_accessor :default_rate, default: 10
+    mattr_accessor :default_interval, default: 1
+
     # Throttle rate:
     # context: optional context, defaults to "default"
     # rate: the count
@@ -29,17 +36,17 @@
       raise "need args or block" if args.blank?
       raise "args need hash type" if args.class != Hash
       args.symbolize_keys!
-      @context = args[:context].presence ||
-      @rate = args[:rate]
-      @interval = args[:interval]
+      @context = args[:context].presence || default_context
+      @rate = args[:rate].presence || default_rate
+      @interval = args[:interval].presence || default_interval
     end
     # call redis.ping to fail fast if redis is not available
-
+    Hutch::Schedule.redis.ping
     # redis: the configured redis instance
     # bucket_interval: the recording interval; smaller means higher accuracy
     @rate_limiter = Ratelimit.new(self.name,
                                   bucket_interval: Hutch::Config.get(:ratelimit_bucket_interval),
-                                  redis:
+                                  redis: Hutch::Schedule.redis)
     end
 
     # is the class-level @rate_limiter's _context exceeded?
@@ -47,29 +54,29 @@
     def ratelimit_exceeded?
       return false if @rate_limiter.blank?
       @rate_limiter.exceeded?(_context, threshold: _rate, interval: _interval)
+    rescue Redis::BaseError
+      # when redis can't connect, report the limit as exceeded
+      true
     end
 
     # record one call
     def ratelimit_add
       return if @rate_limiter.blank?
       @rate_limiter.add(_context)
+    rescue Redis::BaseError
+      nil
     end
 
     def _context
-      @block_given ? @threshold_block.call[:context] : @context
+      @block_given ? @threshold_block.call[:context].presence || default_context : @context
     end
 
     def _rate
-      @block_given ? @threshold_block.call[:rate] : @rate
+      @block_given ? @threshold_block.call[:rate].presence || default_rate : @rate
     end
 
     def _interval
-      @block_given ? @threshold_block.call[:interval] : @interval
-    end
-
-    # all Consumers that use the threshold module shared the same redis instance
-    def _redis
-      @@redis ||= Redis.new(url: Hutch::Config.get(:ratelimit_redis_url))
+      @block_given ? @threshold_block.call[:interval].presence || default_interval : @interval
     end
   end
 end
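The threshold module above delegates counting to the ratelimit gem, which buckets calls by `bucket_interval` and sums the buckets covering the last `interval` seconds when answering `exceeded?`. An illustrative in-memory version of that scheme (`TinyRatelimit` is a sketch, not the gem's Redis-backed implementation):

```ruby
# TinyRatelimit counts calls into fixed time buckets of bucket_interval
# seconds; exceeded? sums the buckets that cover the last `interval`
# seconds and compares against the threshold. Purely illustrative.
class TinyRatelimit
  def initialize(bucket_interval: 1)
    @bucket_interval = bucket_interval
    @buckets = Hash.new(0) # [context, bucket_index] => count
  end

  def add(context, at: Time.now.to_f)
    @buckets[[context, bucket_for(at)]] += 1
  end

  def exceeded?(context, threshold:, interval:, at: Time.now.to_f)
    from = bucket_for(at - interval)
    to   = bucket_for(at)
    (from..to).sum { |b| @buckets[[context, b]] } >= threshold
  end

  private

  def bucket_for(time)
    (time / @bucket_interval).floor
  end
end

limiter = TinyRatelimit.new(bucket_interval: 1)
3.times { limiter.add("get_report", at: 100.0) }
limiter.exceeded?("get_report", threshold: 3, interval: 1, at: 100.5) # => true
limiter.exceeded?("get_report", threshold: 3, interval: 1, at: 200.0) # => false
```

A smaller `bucket_interval` narrows each bucket, which is why the configuration table describes it as "lower is more accurate".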
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: hutch-schedule
 version: !ruby/object:Gem::Version
-  version: 0.7.0
+  version: 0.7.2
 platform: ruby
 authors:
 - Wyatt pan
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2020-04-
+date: 2020-04-15 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: hutch